Test Report: Docker_Linux_crio 21772

32e66bacf90aad56df50495b30e504a3036ca148:2025-10-26:42070

Failed tests (37/326)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 13.88
36 TestAddons/parallel/RegistryCreds 0.42
37 TestAddons/parallel/Ingress 146.49
38 TestAddons/parallel/InspektorGadget 5.31
39 TestAddons/parallel/MetricsServer 5.33
41 TestAddons/parallel/CSI 50.32
42 TestAddons/parallel/Headlamp 2.54
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 10.15
45 TestAddons/parallel/NvidiaDevicePlugin 5.32
46 TestAddons/parallel/Yakd 6.25
47 TestAddons/parallel/AmdGpuDevicePlugin 5.25
97 TestFunctional/parallel/ServiceCmdConnect 602.88
114 TestFunctional/parallel/ServiceCmd/DeployApp 600.64
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.97
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.23
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 4.01
132 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.64
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.22
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.43
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.54
153 TestFunctional/parallel/ServiceCmd/Format 0.55
154 TestFunctional/parallel/ServiceCmd/URL 0.54
190 TestJSONOutput/pause/Command 2.27
196 TestJSONOutput/unpause/Command 1.8
260 TestPause/serial/Pause 6.03
299 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.26
304 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.29
307 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.41
316 TestStartStop/group/old-k8s-version/serial/Pause 6.88
322 TestStartStop/group/no-preload/serial/Pause 6.89
329 TestStartStop/group/embed-certs/serial/Pause 6.01
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.65
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.26
344 TestStartStop/group/newest-cni/serial/Pause 6.94
358 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.47
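Note on the failure pattern: every `addons disable` failure logged in this report shares one stderr signature. minikube's "is the cluster paused" check runs `sudo runc list -f json` inside the node, which fails with `open /run/runc: no such file or directory`, so the command exits 11 with MK_ADDON_DISABLE_PAUSED before the addon is ever touched; even the skipped Volcano test fails this way because the disable cleanup still runs. Below is a minimal Go sketch of such a check with a defensive fallback — `listPaused` is a hypothetical helper, not minikube's actual code: if runc's state root does not exist, no runc-managed container can be paused, so the check could treat that as "nothing paused" rather than an error.

```go
// paused_check.go -- a sketch, not minikube's implementation. It mirrors
// the failing step in the logs: `sudo runc list -f json`, which exits 1
// here because /run/runc does not exist on this CRI-O node.
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// runcContainer holds the two fields we need from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPaused (hypothetical helper) returns IDs of paused containers under
// the given runc state root. A missing root means runc manages nothing
// there, hence nothing can be paused -- return empty instead of erroring.
func listPaused(root string) ([]string, error) {
	if _, err := os.Stat(root); errors.Is(err, os.ErrNotExist) {
		return nil, nil
	}
	out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	// runc prints "null" for an empty list (a nil slice JSON-encoded).
	if strings.TrimSpace(string(out)) == "null" {
		return nil, nil
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, fmt.Errorf("parse runc output: %w", err)
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused("/run/runc")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("paused containers: %v\n", ids)
}
```

Whether /run/runc is even the right state root for this kicbase image (CRI-O may be configured with crun or a different root) is exactly what the missing-directory error calls into question.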
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable volcano --alsologtostderr -v=1: exit status 11 (249.086815ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:49:58.212698   22613 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:49:58.213044   22613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:49:58.213059   22613 out.go:374] Setting ErrFile to fd 2...
	I1026 07:49:58.213067   22613 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:49:58.213311   22613 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:49:58.213576   22613 mustload.go:65] Loading cluster: addons-610291
	I1026 07:49:58.213920   22613 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:49:58.213935   22613 addons.go:606] checking whether the cluster is paused
	I1026 07:49:58.214022   22613 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:49:58.214039   22613 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:49:58.214399   22613 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:49:58.232662   22613 ssh_runner.go:195] Run: systemctl --version
	I1026 07:49:58.232709   22613 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:49:58.249910   22613 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:49:58.347949   22613 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:49:58.348035   22613 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:49:58.377280   22613 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:49:58.377314   22613 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:49:58.377319   22613 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:49:58.377323   22613 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:49:58.377325   22613 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:49:58.377329   22613 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:49:58.377331   22613 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:49:58.377334   22613 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:49:58.377336   22613 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:49:58.377350   22613 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:49:58.377355   22613 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:49:58.377360   22613 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:49:58.377369   22613 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:49:58.377373   22613 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:49:58.377380   22613 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:49:58.377393   22613 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:49:58.377400   22613 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:49:58.377405   22613 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:49:58.377407   22613 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:49:58.377410   22613 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:49:58.377412   22613 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:49:58.377415   22613 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:49:58.377418   22613 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:49:58.377420   22613 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:49:58.377425   22613 cri.go:89] found id: ""
	I1026 07:49:58.377479   22613 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:49:58.391676   22613 out.go:203] 
	W1026 07:49:58.392858   22613 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:49:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:49:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:49:58.392883   22613 out.go:285] * 
	* 
	W1026 07:49:58.395835   22613 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:49:58.397110   22613 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)

TestAddons/parallel/Registry (13.88s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.460028ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003888836s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003273282s
addons_test.go:392: (dbg) Run:  kubectl --context addons-610291 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-610291 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-610291 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.319117247s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 ip
2025/10/26 07:50:20 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable registry --alsologtostderr -v=1: exit status 11 (310.73295ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:20.842305   24514 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:20.842630   24514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:20.842645   24514 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:20.842651   24514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:20.842984   24514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:20.843398   24514 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:20.843914   24514 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:20.843937   24514 addons.go:606] checking whether the cluster is paused
	I1026 07:50:20.844078   24514 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:20.844104   24514 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:20.844712   24514 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:20.868932   24514 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:20.869001   24514 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:20.892196   24514 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:21.003873   24514 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:21.003972   24514 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:21.050766   24514 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:21.050790   24514 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:21.050797   24514 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:21.050802   24514 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:21.050806   24514 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:21.050812   24514 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:21.050817   24514 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:21.050822   24514 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:21.050826   24514 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:21.050840   24514 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:21.050848   24514 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:21.050852   24514 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:21.050857   24514 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:21.050866   24514 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:21.050870   24514 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:21.050876   24514 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:21.050892   24514 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:21.050898   24514 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:21.050902   24514 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:21.050906   24514 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:21.050911   24514 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:21.050915   24514 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:21.050919   24514 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:21.050924   24514 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:21.050928   24514 cri.go:89] found id: ""
	I1026 07:50:21.050972   24514 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:21.071233   24514 out.go:203] 
	W1026 07:50:21.073885   24514 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:21.073907   24514 out.go:285] * 
	* 
	W1026 07:50:21.079202   24514 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:21.081010   24514 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.88s)

TestAddons/parallel/RegistryCreds (0.42s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 2.900275ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-610291
addons_test.go:332: (dbg) Run:  kubectl --context addons-610291 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (246.507034ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:29.153924   26061 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:29.154226   26061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:29.154236   26061 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:29.154243   26061 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:29.154456   26061 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:29.154727   26061 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:29.155080   26061 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:29.155104   26061 addons.go:606] checking whether the cluster is paused
	I1026 07:50:29.155205   26061 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:29.155226   26061 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:29.155603   26061 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:29.173536   26061 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:29.173600   26061 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:29.192181   26061 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:29.292851   26061 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:29.292934   26061 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:29.321564   26061 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:29.321587   26061 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:29.321592   26061 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:29.321597   26061 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:29.321601   26061 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:29.321605   26061 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:29.321608   26061 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:29.321612   26061 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:29.321616   26061 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:29.321626   26061 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:29.321631   26061 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:29.321636   26061 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:29.321639   26061 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:29.321644   26061 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:29.321648   26061 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:29.321664   26061 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:29.321672   26061 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:29.321679   26061 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:29.321683   26061 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:29.321686   26061 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:29.321690   26061 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:29.321696   26061 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:29.321701   26061 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:29.321709   26061 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:29.321714   26061 cri.go:89] found id: ""
	I1026 07:50:29.321767   26061 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:29.337322   26061 out.go:203] 
	W1026 07:50:29.338485   26061 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:29.338503   26061 out.go:285] * 
	* 
	W1026 07:50:29.341493   26061 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:29.342959   26061 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.42s)

TestAddons/parallel/Ingress (146.49s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-610291 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-610291 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-610291 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [be19000a-0cb3-47df-952e-23cb447757b4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [be19000a-0cb3-47df-952e-23cb447757b4] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.003127382s
I1026 07:50:30.563911   12921 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m13.7901672s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-610291 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
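For context on the failing step above: exit status 28 is curl's "operation timed out" code, propagated back through `minikube ssh`, so the request to the ingress controller on 127.0.0.1:80 inside the node hung rather than being refused. A hedged reproduction sketch, run from the host against the node IP 192.168.49.2 (taken from the docker inspect output below) instead of going through ssh; the ingress rule matches on the Host header, so it has to be set explicitly. The 10-second timeout is an assumption for illustration:

```go
// ingress_probe.go -- a sketch of what the failing curl checks, run from
// the host against the node IP instead of via `minikube ssh`. The IP and
// hostname come from this report; the timeout is an assumption.
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 10 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://192.168.49.2/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // the ingress routes on this header
	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("probe failed:", err) // the test saw a timeout here
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(io.LimitReader(resp.Body, 200))
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```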
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-610291
helpers_test.go:243: (dbg) docker inspect addons-610291:

-- stdout --
	[
	    {
	        "Id": "709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111",
	        "Created": "2025-10-26T07:47:45.843572466Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T07:47:45.87708592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/hostname",
	        "HostsPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/hosts",
	        "LogPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111-json.log",
	        "Name": "/addons-610291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-610291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-610291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111",
	                "LowerDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-610291",
	                "Source": "/var/lib/docker/volumes/addons-610291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-610291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-610291",
	                "name.minikube.sigs.k8s.io": "addons-610291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a7c7949bbcf1f10ee54d165f96e838f5624bf03cdc69b2f5246e545b1740dc8",
	            "SandboxKey": "/var/run/docker/netns/5a7c7949bbcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-610291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:30:8d:e9:f2:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b22a427e8709e846cfa922d1f5e5433a05ebedc13e9c92c84d3699672c9349c",
	                    "EndpointID": "bf64b31b3bc6b0103283a9ae71065d9e07ab27c3ea6e5c4119e195f6aafed183",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-610291",
	                        "709e79e538aa"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
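One detail worth calling out from the inspect output: the `Ports` map is what the template in the cli_runner.go:164 lines earlier in this report walks to find the SSH host port (32768, matching the sshutil.go:53 connection). A standalone sketch replaying that same template — the template string is verbatim from the report, while the struct is reduced to only the fields the template touches:

```go
// port_template.go -- replays the docker inspect -f template from this
// report against a trimmed struct; field names mirror docker's JSON.
package main

import (
	"os"
	"text/template"
)

type portBinding struct {
	HostIp   string
	HostPort string
}

type inspectDoc struct {
	NetworkSettings struct {
		Ports map[string][]portBinding
	}
}

func main() {
	var doc inspectDoc
	doc.NetworkSettings.Ports = map[string][]portBinding{
		"22/tcp": {{HostIp: "127.0.0.1", HostPort: "32768"}},
	}
	// The exact template minikube passes to `docker container inspect -f`.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	if err := tmpl.Execute(os.Stdout, doc); err != nil { // prints: 32768
		panic(err)
	}
}
```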
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-610291 -n addons-610291
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-610291 logs -n 25: (1.176389338s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-916619 --alsologtostderr --binary-mirror http://127.0.0.1:36125 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-916619 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ -p binary-mirror-916619                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-916619 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ addons  │ enable dashboard -p addons-610291                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-610291                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ start   │ -p addons-610291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:49 UTC │
	│ addons  │ addons-610291 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:49 UTC │                     │
	│ addons  │ addons-610291 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-610291 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ ip      │ addons-610291 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │ 26 Oct 25 07:50 UTC │
	│ addons  │ addons-610291 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ ssh     │ addons-610291 ssh cat /opt/local-path-provisioner/pvc-8572f8bb-02cc-4c0a-8349-02180884ca24_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │ 26 Oct 25 07:50 UTC │
	│ addons  │ addons-610291 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-610291                                                                                                                                                                                                                                                                                                                                                                                           │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │ 26 Oct 25 07:50 UTC │
	│ addons  │ addons-610291 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ ssh     │ addons-610291 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ addons-610291 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ ip      │ addons-610291 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-610291        │ jenkins │ v1.37.0 │ 26 Oct 25 07:52 UTC │ 26 Oct 25 07:52 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:21.279595   14247 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:21.279703   14247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:21.279712   14247 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:21.279715   14247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:21.279905   14247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:47:21.280428   14247 out.go:368] Setting JSON to false
	I1026 07:47:21.281204   14247 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1792,"bootTime":1761463049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:21.281306   14247 start.go:141] virtualization: kvm guest
	I1026 07:47:21.283307   14247 out.go:179] * [addons-610291] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:21.284958   14247 notify.go:220] Checking for updates...
	I1026 07:47:21.284979   14247 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:47:21.286474   14247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:21.287828   14247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:47:21.289486   14247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:47:21.290677   14247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:47:21.291791   14247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:47:21.293219   14247 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:21.316288   14247 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:47:21.316387   14247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:21.374931   14247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 07:47:21.365489879 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:21.375037   14247 docker.go:318] overlay module found
	I1026 07:47:21.376723   14247 out.go:179] * Using the docker driver based on user configuration
	I1026 07:47:21.377857   14247 start.go:305] selected driver: docker
	I1026 07:47:21.377873   14247 start.go:925] validating driver "docker" against <nil>
	I1026 07:47:21.377882   14247 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:47:21.378451   14247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:21.429528   14247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 07:47:21.420550362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:21.429672   14247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:21.429859   14247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:47:21.431628   14247 out.go:179] * Using Docker driver with root privileges
	I1026 07:47:21.432809   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:47:21.432879   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:47:21.432893   14247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 07:47:21.432957   14247 start.go:349] cluster config:
	{Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:21.434263   14247 out.go:179] * Starting "addons-610291" primary control-plane node in "addons-610291" cluster
	I1026 07:47:21.435379   14247 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 07:47:21.436511   14247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 07:47:21.437649   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:21.437691   14247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:21.437717   14247 cache.go:58] Caching tarball of preloaded images
	I1026 07:47:21.437771   14247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 07:47:21.437791   14247 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 07:47:21.437802   14247 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 07:47:21.438151   14247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json ...
	I1026 07:47:21.438175   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json: {Name:mkcca355575390147054e49c3b0ee0e3923d5755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:21.453391   14247 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 07:47:21.453510   14247 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 07:47:21.453530   14247 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 07:47:21.453538   14247 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 07:47:21.453548   14247 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 07:47:21.453558   14247 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 07:47:34.093122   14247 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 07:47:34.093150   14247 cache.go:232] Successfully downloaded all kic artifacts
	I1026 07:47:34.093192   14247 start.go:360] acquireMachinesLock for addons-610291: {Name:mk5ae23e2a114127e4eb4fc97f79aafc5ce2edba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 07:47:34.093321   14247 start.go:364] duration metric: took 108.763µs to acquireMachinesLock for "addons-610291"
	I1026 07:47:34.093353   14247 start.go:93] Provisioning new machine with config: &{Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:47:34.093418   14247 start.go:125] createHost starting for "" (driver="docker")
	I1026 07:47:34.095319   14247 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 07:47:34.095615   14247 start.go:159] libmachine.API.Create for "addons-610291" (driver="docker")
	I1026 07:47:34.095654   14247 client.go:168] LocalClient.Create starting
	I1026 07:47:34.095777   14247 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 07:47:34.237140   14247 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 07:47:34.558729   14247 cli_runner.go:164] Run: docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 07:47:34.575242   14247 cli_runner.go:211] docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 07:47:34.575326   14247 network_create.go:284] running [docker network inspect addons-610291] to gather additional debugging logs...
	I1026 07:47:34.575350   14247 cli_runner.go:164] Run: docker network inspect addons-610291
	W1026 07:47:34.590663   14247 cli_runner.go:211] docker network inspect addons-610291 returned with exit code 1
	I1026 07:47:34.590692   14247 network_create.go:287] error running [docker network inspect addons-610291]: docker network inspect addons-610291: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-610291 not found
	I1026 07:47:34.590705   14247 network_create.go:289] output of [docker network inspect addons-610291]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-610291 not found
	
	** /stderr **
	I1026 07:47:34.590824   14247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 07:47:34.606988   14247 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017792a0}
	I1026 07:47:34.607059   14247 network_create.go:124] attempt to create docker network addons-610291 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 07:47:34.607102   14247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-610291 addons-610291
	I1026 07:47:34.660608   14247 network_create.go:108] docker network addons-610291 192.168.49.0/24 created
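
The dedicated bridge network created above can be inspected, or reproduced by hand, with plain Docker commands. A minimal sketch reusing the subnet/gateway/MTU values from this run (the network name my-test-net is hypothetical):

	# Confirm the subnet minikube picked for the profile network
	docker network inspect addons-610291 --format '{{(index .IPAM.Config 0).Subnet}}'
	# Equivalent manual creation with the same bridge options
	docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 \
	  -o com.docker.network.driver.mtu=1500 my-test-net
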
	I1026 07:47:34.660638   14247 kic.go:121] calculated static IP "192.168.49.2" for the "addons-610291" container
	I1026 07:47:34.660719   14247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 07:47:34.677195   14247 cli_runner.go:164] Run: docker volume create addons-610291 --label name.minikube.sigs.k8s.io=addons-610291 --label created_by.minikube.sigs.k8s.io=true
	I1026 07:47:34.694118   14247 oci.go:103] Successfully created a docker volume addons-610291
	I1026 07:47:34.694185   14247 cli_runner.go:164] Run: docker run --rm --name addons-610291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --entrypoint /usr/bin/test -v addons-610291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 07:47:41.497732   14247 cli_runner.go:217] Completed: docker run --rm --name addons-610291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --entrypoint /usr/bin/test -v addons-610291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.803503345s)
	I1026 07:47:41.497766   14247 oci.go:107] Successfully prepared a docker volume addons-610291
	I1026 07:47:41.497794   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:41.497811   14247 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 07:47:41.497891   14247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-610291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 07:47:45.772375   14247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-610291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.274420193s)
	I1026 07:47:45.772403   14247 kic.go:203] duration metric: took 4.274587495s to extract preloaded images to volume ...
	W1026 07:47:45.772499   14247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 07:47:45.772539   14247 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 07:47:45.772593   14247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 07:47:45.828381   14247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-610291 --name addons-610291 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-610291 --network addons-610291 --ip 192.168.49.2 --volume addons-610291:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
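
The docker run above is the heart of the kic driver: a privileged container given a static --ip on the profile network, the preloaded addons-610291 volume mounted at /var, and SSH (22) plus the API server port (8443) published on loopback with host ports chosen by Docker. A trimmed sketch of the same pattern, with hypothetical names:

	docker run -d -t --privileged \
	  --network my-test-net --ip 192.168.49.2 \
	  --volume my-node-var:/var \
	  --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	  --hostname my-node --name my-node \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
	# Docker assigns free host ports for the 127.0.0.1 publishes; list them with:
	docker port my-node
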
	I1026 07:47:46.123437   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Running}}
	I1026 07:47:46.141900   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.160299   14247 cli_runner.go:164] Run: docker exec addons-610291 stat /var/lib/dpkg/alternatives/iptables
	I1026 07:47:46.205212   14247 oci.go:144] the created container "addons-610291" has a running status.
	I1026 07:47:46.205238   14247 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa...
	I1026 07:47:46.616196   14247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 07:47:46.642748   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.661100   14247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 07:47:46.661123   14247 kic_runner.go:114] Args: [docker exec --privileged addons-610291 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 07:47:46.706187   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.723342   14247 machine.go:93] provisionDockerMachine start ...
	I1026 07:47:46.723434   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:46.741573   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:46.741823   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:46.741839   14247 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 07:47:46.881944   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610291
	
	I1026 07:47:46.881978   14247 ubuntu.go:182] provisioning hostname "addons-610291"
	I1026 07:47:46.882052   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:46.899186   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:46.899425   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:46.899442   14247 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-610291 && echo "addons-610291" | sudo tee /etc/hostname
	I1026 07:47:47.046199   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610291
	
	I1026 07:47:47.046289   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.064409   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:47.064668   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:47.064693   14247 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-610291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610291/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-610291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 07:47:47.202657   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
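
The shell snippet sent over SSH above is idempotent: it only rewrites the 127.0.1.1 line when /etc/hosts has no entry for the hostname yet (hence the empty output here, since nothing needed changing). A quick check inside the node would be:

	docker exec addons-610291 grep addons-610291 /etc/hosts
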
	I1026 07:47:47.202689   14247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 07:47:47.202734   14247 ubuntu.go:190] setting up certificates
	I1026 07:47:47.202749   14247 provision.go:84] configureAuth start
	I1026 07:47:47.202807   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:47.220449   14247 provision.go:143] copyHostCerts
	I1026 07:47:47.220511   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 07:47:47.220619   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 07:47:47.220678   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 07:47:47.220728   14247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.addons-610291 san=[127.0.0.1 192.168.49.2 addons-610291 localhost minikube]
	I1026 07:47:47.401519   14247 provision.go:177] copyRemoteCerts
	I1026 07:47:47.401570   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 07:47:47.401600   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.418631   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:47.517204   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 07:47:47.534807   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 07:47:47.550881   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 07:47:47.567695   14247 provision.go:87] duration metric: took 364.932184ms to configureAuth
	I1026 07:47:47.567718   14247 ubuntu.go:206] setting minikube options for container-runtime
	I1026 07:47:47.567852   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:47:47.567936   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.585451   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:47.585688   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:47.585714   14247 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 07:47:47.833685   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 07:47:47.833706   14247 machine.go:96] duration metric: took 1.110341315s to provisionDockerMachine
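
The /etc/sysconfig/crio.minikube file written just above carries extra CRI-O flags (here, treating the 10.96.0.0/12 service CIDR as an insecure registry range); it is presumably sourced as an environment file by the kicbase's crio systemd unit, which is why the command restarts crio afterwards. To double-check inside the node:

	docker exec addons-610291 cat /etc/sysconfig/crio.minikube
	docker exec addons-610291 systemctl is-active crio
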
	I1026 07:47:47.833716   14247 client.go:171] duration metric: took 13.738051438s to LocalClient.Create
	I1026 07:47:47.833735   14247 start.go:167] duration metric: took 13.738119331s to libmachine.API.Create "addons-610291"
	I1026 07:47:47.833744   14247 start.go:293] postStartSetup for "addons-610291" (driver="docker")
	I1026 07:47:47.833756   14247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 07:47:47.833810   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 07:47:47.833858   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.851692   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:47.952937   14247 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 07:47:47.956352   14247 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 07:47:47.956376   14247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 07:47:47.956386   14247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 07:47:47.956444   14247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 07:47:47.956471   14247 start.go:296] duration metric: took 122.720964ms for postStartSetup
	I1026 07:47:47.956761   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:47.973604   14247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json ...
	I1026 07:47:47.973852   14247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 07:47:47.973892   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.990955   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.086398   14247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 07:47:48.090727   14247 start.go:128] duration metric: took 13.997297631s to createHost
	I1026 07:47:48.090748   14247 start.go:83] releasing machines lock for "addons-610291", held for 13.997410793s
	I1026 07:47:48.090801   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:48.109673   14247 ssh_runner.go:195] Run: cat /version.json
	I1026 07:47:48.109731   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:48.109760   14247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 07:47:48.109826   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:48.127933   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.128615   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.279079   14247 ssh_runner.go:195] Run: systemctl --version
	I1026 07:47:48.285415   14247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 07:47:48.319345   14247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 07:47:48.323943   14247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 07:47:48.324002   14247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 07:47:48.349760   14247 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 07:47:48.349791   14247 start.go:495] detecting cgroup driver to use...
	I1026 07:47:48.349818   14247 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 07:47:48.349864   14247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 07:47:48.365050   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 07:47:48.377154   14247 docker.go:218] disabling cri-docker service (if available) ...
	I1026 07:47:48.377209   14247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 07:47:48.392886   14247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 07:47:48.409920   14247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 07:47:48.488743   14247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 07:47:48.575380   14247 docker.go:234] disabling docker service ...
	I1026 07:47:48.575454   14247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 07:47:48.593462   14247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 07:47:48.606110   14247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 07:47:48.687992   14247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 07:47:48.770818   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 07:47:48.783098   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 07:47:48.797105   14247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 07:47:48.797154   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.806959   14247 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 07:47:48.807013   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.815303   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.823310   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.831403   14247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 07:47:48.839256   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.847534   14247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.860404   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
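
Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with the systemd cgroup manager, the pinned pause image, a pod-scoped conmon cgroup, and the unprivileged-port sysctl. A sketch of how to confirm the end state (expected values shown as comments, not a verbatim dump of the file):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# Expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",   (inside default_sysctls)
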
	I1026 07:47:48.868925   14247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 07:47:48.875694   14247 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 07:47:48.875737   14247 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 07:47:48.887148   14247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
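
The failed sysctl above is expected on a fresh kernel: /proc/sys/net/bridge/ only exists once br_netfilter is loaded, which is exactly what the following modprobe provides. The recovery sequence in plain shell:

	sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables            # now resolves, typically to 1
	sudo sh -c 'echo 1 > /proc/sys/net/ipv4/ip_forward'
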
	I1026 07:47:48.894200   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:47:48.971031   14247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 07:47:49.074879   14247 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 07:47:49.074947   14247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 07:47:49.078665   14247 start.go:563] Will wait 60s for crictl version
	I1026 07:47:49.078732   14247 ssh_runner.go:195] Run: which crictl
	I1026 07:47:49.082169   14247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 07:47:49.106368   14247 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 07:47:49.106502   14247 ssh_runner.go:195] Run: crio --version
	I1026 07:47:49.133506   14247 ssh_runner.go:195] Run: crio --version
	I1026 07:47:49.163483   14247 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 07:47:49.164522   14247 cli_runner.go:164] Run: docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 07:47:49.181305   14247 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 07:47:49.185230   14247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:47:49.195135   14247 kubeadm.go:883] updating cluster {Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 07:47:49.195225   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:49.195284   14247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:47:49.223719   14247 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 07:47:49.223738   14247 crio.go:433] Images already preloaded, skipping extraction
	I1026 07:47:49.223781   14247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:47:49.246859   14247 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 07:47:49.246879   14247 cache_images.go:85] Images are preloaded, skipping loading
	I1026 07:47:49.246885   14247 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 07:47:49.246960   14247 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-610291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
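
The drop-in above uses the standard systemd override pattern: the empty ExecStart= clears the packaged command before the replacement line sets kubelet's real invocation. To see the merged unit a node actually runs:

	systemctl cat kubelet                  # base unit plus the 10-kubeadm.conf drop-in
	systemctl show -p ExecStart kubelet
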
	I1026 07:47:49.247017   14247 ssh_runner.go:195] Run: crio config
	I1026 07:47:49.291433   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:47:49.291455   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:47:49.291479   14247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 07:47:49.291507   14247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610291 NodeName:addons-610291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 07:47:49.291653   14247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-610291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
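
This multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below and later handed to kubeadm. The exact invocation is not shown in this excerpt, but the manual equivalent would be along the lines of:

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=SystemVerification
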
	
	I1026 07:47:49.291728   14247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 07:47:49.299600   14247 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 07:47:49.299662   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 07:47:49.306882   14247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 07:47:49.318904   14247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 07:47:49.334378   14247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 07:47:49.347269   14247 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 07:47:49.351037   14247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:47:49.360898   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:47:49.440802   14247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:47:49.465707   14247 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291 for IP: 192.168.49.2
	I1026 07:47:49.465725   14247 certs.go:195] generating shared ca certs ...
	I1026 07:47:49.465739   14247 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.465844   14247 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 07:47:49.751724   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt ...
	I1026 07:47:49.751756   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt: {Name:mk22a5729f47ea6d5d732bc99ea3bee5794d62ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.751925   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key ...
	I1026 07:47:49.751936   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key: {Name:mkee1e95054c760f9f30ea61b9e625b3b8c7e485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.752025   14247 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 07:47:50.151821   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt ...
	I1026 07:47:50.151849   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt: {Name:mkf2594a4b511b04a346ce370fe4d575bea18e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.152020   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key ...
	I1026 07:47:50.152032   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key: {Name:mk609c4d5e45bb36cc12f3827342395af5d820f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.152103   14247 certs.go:257] generating profile certs ...
	I1026 07:47:50.152158   14247 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key
	I1026 07:47:50.152173   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt with IP's: []
	I1026 07:47:50.215686   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt ...
	I1026 07:47:50.215714   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: {Name:mkecc3d3e94268147dd2d8cdbd70e447ff58bc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.215866   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key ...
	I1026 07:47:50.215880   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key: {Name:mkd578ae209befaa9b0d8558f5ed038dd7e81266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.215952   14247 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5
	I1026 07:47:50.215972   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 07:47:50.432125   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 ...
	I1026 07:47:50.432153   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5: {Name:mk4d9a750d8ada4e8e008c2c1ddad70a2f3e0625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.432318   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5 ...
	I1026 07:47:50.432333   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5: {Name:mkf99c16a95038b7b0dfaebb9b18bcf2232ea333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.432405   14247 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt
	I1026 07:47:50.432475   14247 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key
	I1026 07:47:50.432524   14247 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key
	I1026 07:47:50.432541   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt with IP's: []
	I1026 07:47:50.746921   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt ...
	I1026 07:47:50.746951   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt: {Name:mkbbdf7bed8f765e54fad832da39c8a295138c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.747111   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key ...
	I1026 07:47:50.747122   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key: {Name:mk5b33d8803f9c5929454310a9ea4a5e1c8050aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.747297   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 07:47:50.747332   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 07:47:50.747361   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 07:47:50.747383   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 07:47:50.747953   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 07:47:50.765823   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 07:47:50.782551   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 07:47:50.799363   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 07:47:50.815695   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 07:47:50.831965   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 07:47:50.848439   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 07:47:50.864675   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 07:47:50.880826   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 07:47:50.899167   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 07:47:50.911131   14247 ssh_runner.go:195] Run: openssl version
	I1026 07:47:50.916961   14247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 07:47:50.928040   14247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.932127   14247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.932180   14247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.970989   14247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
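
The openssl x509 -hash -noout call computes the OpenSSL subject-name hash of minikubeCA.pem (b5213941 here), and the symlink that follows makes the CA discoverable under /etc/ssl/certs by that hash, which is how OpenSSL-based clients look up trust anchors. A sketch that shells out to openssl the same way (assumes openssl on PATH; reproducing the legacy subject hash in pure Go is deliberately avoided):

package sketch

import (
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash asks openssl for the subject hash of certPath and
// symlinks <dir>/<hash>.0 to it, matching the log's b5213941.0 link.
func linkBySubjectHash(certPath, dir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := dir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}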
	I1026 07:47:50.979521   14247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 07:47:50.982986   14247 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 07:47:50.983037   14247 kubeadm.go:400] StartCluster: {Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:50.983120   14247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:47:50.983176   14247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:47:51.008856   14247 cri.go:89] found id: ""
	I1026 07:47:51.008913   14247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 07:47:51.017006   14247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 07:47:51.024683   14247 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 07:47:51.024740   14247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 07:47:51.032392   14247 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 07:47:51.032419   14247 kubeadm.go:157] found existing configuration files:
	
	I1026 07:47:51.032461   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 07:47:51.040194   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 07:47:51.040236   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 07:47:51.047357   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 07:47:51.054704   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 07:47:51.054747   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 07:47:51.061735   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 07:47:51.068928   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 07:47:51.068991   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 07:47:51.075673   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 07:47:51.082719   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 07:47:51.082776   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 07:47:51.089481   14247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 07:47:51.144127   14247 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 07:47:51.198085   14247 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 07:48:00.765490   14247 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 07:48:00.765570   14247 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 07:48:00.765694   14247 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 07:48:00.765767   14247 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 07:48:00.765811   14247 kubeadm.go:318] OS: Linux
	I1026 07:48:00.765850   14247 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 07:48:00.765889   14247 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 07:48:00.765929   14247 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 07:48:00.765968   14247 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 07:48:00.766021   14247 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 07:48:00.766103   14247 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 07:48:00.766186   14247 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 07:48:00.766283   14247 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 07:48:00.766401   14247 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 07:48:00.766534   14247 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 07:48:00.766673   14247 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 07:48:00.766761   14247 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 07:48:00.768613   14247 out.go:252]   - Generating certificates and keys ...
	I1026 07:48:00.768688   14247 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 07:48:00.768779   14247 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 07:48:00.768861   14247 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 07:48:00.768932   14247 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 07:48:00.768994   14247 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 07:48:00.769055   14247 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 07:48:00.769120   14247 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 07:48:00.769266   14247 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-610291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 07:48:00.769342   14247 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 07:48:00.769470   14247 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-610291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 07:48:00.769563   14247 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 07:48:00.769649   14247 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 07:48:00.769713   14247 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 07:48:00.769798   14247 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 07:48:00.769850   14247 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 07:48:00.769912   14247 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 07:48:00.769958   14247 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 07:48:00.770022   14247 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 07:48:00.770101   14247 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 07:48:00.770212   14247 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 07:48:00.770317   14247 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 07:48:00.771814   14247 out.go:252]   - Booting up control plane ...
	I1026 07:48:00.771890   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 07:48:00.771985   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 07:48:00.772079   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 07:48:00.772199   14247 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 07:48:00.772335   14247 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 07:48:00.772446   14247 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 07:48:00.772521   14247 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 07:48:00.772555   14247 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 07:48:00.772685   14247 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 07:48:00.772783   14247 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 07:48:00.772846   14247 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001888489s
	I1026 07:48:00.772933   14247 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 07:48:00.773001   14247 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 07:48:00.773076   14247 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 07:48:00.773151   14247 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 07:48:00.773216   14247 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.533951099s
	I1026 07:48:00.773310   14247 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.92003129s
	I1026 07:48:00.773411   14247 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501436204s
	I1026 07:48:00.773524   14247 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 07:48:00.773648   14247 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 07:48:00.773730   14247 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 07:48:00.773921   14247 kubeadm.go:318] [mark-control-plane] Marking the node addons-610291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 07:48:00.773987   14247 kubeadm.go:318] [bootstrap-token] Using token: aa1fmf.q9mlltjnhg1c496f
	I1026 07:48:00.775507   14247 out.go:252]   - Configuring RBAC rules ...
	I1026 07:48:00.775605   14247 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 07:48:00.775699   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 07:48:00.775839   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 07:48:00.775955   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 07:48:00.776088   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 07:48:00.776176   14247 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 07:48:00.776307   14247 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 07:48:00.776347   14247 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 07:48:00.776387   14247 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 07:48:00.776393   14247 kubeadm.go:318] 
	I1026 07:48:00.776457   14247 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 07:48:00.776463   14247 kubeadm.go:318] 
	I1026 07:48:00.776550   14247 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 07:48:00.776562   14247 kubeadm.go:318] 
	I1026 07:48:00.776596   14247 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 07:48:00.776677   14247 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 07:48:00.776741   14247 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 07:48:00.776749   14247 kubeadm.go:318] 
	I1026 07:48:00.776793   14247 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 07:48:00.776801   14247 kubeadm.go:318] 
	I1026 07:48:00.776840   14247 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 07:48:00.776846   14247 kubeadm.go:318] 
	I1026 07:48:00.776889   14247 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 07:48:00.776954   14247 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 07:48:00.777012   14247 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 07:48:00.777027   14247 kubeadm.go:318] 
	I1026 07:48:00.777123   14247 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 07:48:00.777193   14247 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 07:48:00.777199   14247 kubeadm.go:318] 
	I1026 07:48:00.777304   14247 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token aa1fmf.q9mlltjnhg1c496f \
	I1026 07:48:00.777419   14247 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 07:48:00.777441   14247 kubeadm.go:318] 	--control-plane 
	I1026 07:48:00.777445   14247 kubeadm.go:318] 
	I1026 07:48:00.777537   14247 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 07:48:00.777548   14247 kubeadm.go:318] 
	I1026 07:48:00.777677   14247 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token aa1fmf.q9mlltjnhg1c496f \
	I1026 07:48:00.777909   14247 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
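
The --discovery-token-ca-cert-hash in both join commands is kubeadm's CA pin: a SHA-256 over the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate, which joining nodes compare against the CA they fetch from the cluster. It can be recomputed from ca.crt with the Go standard library alone:

package sketch

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// caCertHash reproduces kubeadm's discovery-token-ca-cert-hash:
// sha256 over the DER-encoded SubjectPublicKeyInfo of the CA cert.
func caCertHash(caPath string) (string, error) {
	raw, err := os.ReadFile(caPath)
	if err != nil {
		return "", err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return "", fmt.Errorf("no PEM block in %s", caPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return "", err
	}
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		return "", err
	}
	return fmt.Sprintf("sha256:%x", sha256.Sum256(spki)), nil
}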
	I1026 07:48:00.777926   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:48:00.777932   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:48:00.779366   14247 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 07:48:00.780670   14247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 07:48:00.784746   14247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 07:48:00.784765   14247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 07:48:00.797362   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 07:48:00.992886   14247 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 07:48:00.992957   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:00.993025   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610291 minikube.k8s.io/updated_at=2025_10_26T07_48_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=addons-610291 minikube.k8s.io/primary=true
	I1026 07:48:01.069083   14247 ops.go:34] apiserver oom_adj: -16
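
The oom_adj probe above (cat /proc/$(pgrep kube-apiserver)/oom_adj returning -16) confirms the kernel is strongly biased against OOM-killing the apiserver. A sketch of the same check; the pgrep -o flag (oldest match) is an addition here so a single pid comes back, and note modern kernels prefer the oom_score_adj file:

package sketch

import (
	"os"
	"os/exec"
	"strings"
)

// apiserverOOMAdj finds kube-apiserver with pgrep and returns the
// contents of its /proc/<pid>/oom_adj (e.g. "-16").
func apiserverOOMAdj() (string, error) {
	pid, err := exec.Command("pgrep", "-o", "kube-apiserver").Output()
	if err != nil {
		return "", err
	}
	raw, err := os.ReadFile("/proc/" + strings.TrimSpace(string(pid)) + "/oom_adj")
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(raw)), nil
}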
	I1026 07:48:01.069221   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:01.570185   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:02.069962   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:02.569328   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:03.069477   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:03.569520   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:04.069713   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:04.570045   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:05.070328   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:05.132624   14247 kubeadm.go:1113] duration metric: took 4.139722629s to wait for elevateKubeSystemPrivileges
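
The eight kubectl get sa default runs above, roughly 500ms apart, are a plain readiness poll: the "default" ServiceAccount only exists once kube-controller-manager has reconciled the namespace, and elevateKubeSystemPrivileges needs it before binding cluster-admin to it. The shape of that loop, sketched with a context deadline (waitForDefaultSA is an illustrative name, not minikube's):

package sketch

import (
	"context"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds
// or ctx expires, mirroring the ~500ms retry cadence in the log.
func waitForDefaultSA(ctx context.Context, kubeconfig string) error {
	tick := time.NewTicker(500 * time.Millisecond)
	defer tick.Stop()
	for {
		cmd := exec.CommandContext(ctx, "kubectl", "get", "sa", "default",
			"--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; privileges can be granted
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-tick.C:
		}
	}
}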
	I1026 07:48:05.132657   14247 kubeadm.go:402] duration metric: took 14.149622166s to StartCluster
	I1026 07:48:05.132672   14247 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:05.132801   14247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:48:05.133393   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:05.133646   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 07:48:05.133657   14247 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:48:05.133676   14247 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 07:48:05.133816   14247 addons.go:69] Setting default-storageclass=true in profile "addons-610291"
	I1026 07:48:05.133820   14247 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-610291"
	I1026 07:48:05.133833   14247 addons.go:69] Setting volcano=true in profile "addons-610291"
	I1026 07:48:05.133836   14247 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-610291"
	I1026 07:48:05.133844   14247 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-610291"
	I1026 07:48:05.133844   14247 addons.go:69] Setting registry-creds=true in profile "addons-610291"
	I1026 07:48:05.133866   14247 addons.go:69] Setting storage-provisioner=true in profile "addons-610291"
	I1026 07:48:05.133871   14247 addons.go:238] Setting addon registry-creds=true in "addons-610291"
	I1026 07:48:05.133873   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133891   14247 addons.go:238] Setting addon storage-provisioner=true in "addons-610291"
	I1026 07:48:05.133898   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133910   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:05.133912   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133792   14247 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-610291"
	I1026 07:48:05.134130   14247 addons.go:69] Setting gcp-auth=true in profile "addons-610291"
	I1026 07:48:05.134161   14247 mustload.go:65] Loading cluster: addons-610291
	I1026 07:48:05.134175   14247 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-610291"
	I1026 07:48:05.134207   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134221   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134363   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134386   14247 addons.go:69] Setting inspektor-gadget=true in profile "addons-610291"
	I1026 07:48:05.134419   14247 addons.go:69] Setting ingress-dns=true in profile "addons-610291"
	I1026 07:48:05.134433   14247 addons.go:238] Setting addon ingress-dns=true in "addons-610291"
	I1026 07:48:05.134438   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:05.134448   14247 addons.go:69] Setting metrics-server=true in profile "addons-610291"
	I1026 07:48:05.134450   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134460   14247 addons.go:238] Setting addon metrics-server=true in "addons-610291"
	I1026 07:48:05.134483   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134498   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134718   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133800   14247 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-610291"
	I1026 07:48:05.134742   14247 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-610291"
	I1026 07:48:05.134769   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134957   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134993   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134720   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133848   14247 addons.go:238] Setting addon volcano=true in "addons-610291"
	I1026 07:48:05.136016   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134441   14247 addons.go:238] Setting addon inspektor-gadget=true in "addons-610291"
	I1026 07:48:05.136209   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134411   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.136782   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.136786   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133831   14247 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-610291"
	I1026 07:48:05.139429   14247 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610291"
	I1026 07:48:05.139867   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133804   14247 addons.go:69] Setting cloud-spanner=true in profile "addons-610291"
	I1026 07:48:05.140407   14247 addons.go:238] Setting addon cloud-spanner=true in "addons-610291"
	I1026 07:48:05.140445   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133812   14247 addons.go:69] Setting registry=true in profile "addons-610291"
	I1026 07:48:05.140674   14247 addons.go:238] Setting addon registry=true in "addons-610291"
	I1026 07:48:05.140735   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133856   14247 addons.go:69] Setting volumesnapshots=true in profile "addons-610291"
	I1026 07:48:05.141097   14247 addons.go:238] Setting addon volumesnapshots=true in "addons-610291"
	I1026 07:48:05.141130   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.141598   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.141859   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133812   14247 addons.go:69] Setting ingress=true in profile "addons-610291"
	I1026 07:48:05.143278   14247 out.go:179] * Verifying Kubernetes components...
	I1026 07:48:05.133801   14247 addons.go:69] Setting yakd=true in profile "addons-610291"
	I1026 07:48:05.143388   14247 addons.go:238] Setting addon yakd=true in "addons-610291"
	I1026 07:48:05.143322   14247 addons.go:238] Setting addon ingress=true in "addons-610291"
	I1026 07:48:05.143416   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.143458   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.144063   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.144071   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.147572   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:48:05.149505   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.151309   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.188368   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.202026   14247 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 07:48:05.202113   14247 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 07:48:05.203741   14247 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:05.203768   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 07:48:05.203783   14247 addons.go:238] Setting addon default-storageclass=true in "addons-610291"
	I1026 07:48:05.203822   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.203963   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.203992   14247 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 07:48:05.204055   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 07:48:05.204074   14247 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 07:48:05.204131   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.205215   14247 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:05.205236   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 07:48:05.205293   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.205296   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.217623   14247 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-610291"
	I1026 07:48:05.217696   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.218366   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.218830   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 07:48:05.220265   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 07:48:05.220280   14247 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 07:48:05.220354   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.220454   14247 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 07:48:05.221843   14247 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:05.222628   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 07:48:05.222721   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.223599   14247 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 07:48:05.228183   14247 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:05.228199   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 07:48:05.228265   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.233921   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 07:48:05.234022   14247 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	W1026 07:48:05.235160   14247 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 07:48:05.235839   14247 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 07:48:05.235857   14247 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 07:48:05.235958   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.236793   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:05.238448   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:05.239730   14247 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:05.239744   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 07:48:05.239793   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.239940   14247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 07:48:05.243323   14247 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:05.243343   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 07:48:05.243393   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.249907   14247 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 07:48:05.250428   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 07:48:05.251102   14247 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 07:48:05.251106   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 07:48:05.251122   14247 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 07:48:05.251172   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.253541   14247 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:05.253728   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 07:48:05.253799   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.257226   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 07:48:05.258729   14247 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 07:48:05.260280   14247 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 07:48:05.260481   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.261502   14247 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 07:48:05.261820   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 07:48:05.262050   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.262092   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 07:48:05.263819   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 07:48:05.266837   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 07:48:05.268703   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 07:48:05.271072   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 07:48:05.273011   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 07:48:05.273288   14247 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:05.273653   14247 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 07:48:05.273765   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.274411   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 07:48:05.274429   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 07:48:05.274585   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.275619   14247 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 07:48:05.277022   14247 out.go:179]   - Using image docker.io/busybox:stable
	I1026 07:48:05.278415   14247 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:05.278430   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 07:48:05.278479   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.290985   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 07:48:05.291948   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.292590   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.321835   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.327089   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.327794   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.329122   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.329421   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334404   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334489   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334843   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.342759   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.343471   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.344621   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.346235   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	W1026 07:48:05.353365   14247 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 07:48:05.353398   14247 retry.go:31] will retry after 162.155925ms: ssh: handshake failed: EOF
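
The handshake EOF is unsurprising with this many goroutines dialing the same forwarded SSH port at once; retry.go simply redials after a randomized backoff (162ms here). A minimal retry-with-jitter sketch in the same spirit (the exact backoff policy is an assumption, not minikube's documented one):

package sketch

import (
	"math/rand"
	"time"
)

// retryWithJitter calls fn up to attempts times, sleeping a jittered,
// exponentially growing backoff between failures.
func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// full jitter: sleep a random duration in [0, base<<i)
		time.Sleep(time.Duration(rand.Int63n(int64(base << uint(i)))))
	}
	return err
}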
	I1026 07:48:05.371901   14247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:48:05.428281   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:05.461601   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:05.478365   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:05.482442   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 07:48:05.482464   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 07:48:05.491559   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:05.497762   14247 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:05.497793   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 07:48:05.509424   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:05.512534   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:05.514758   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:05.517557   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 07:48:05.517580   14247 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 07:48:05.518231   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 07:48:05.518265   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 07:48:05.535306   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 07:48:05.535363   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 07:48:05.542779   14247 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 07:48:05.542806   14247 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 07:48:05.552143   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:05.561883   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 07:48:05.561908   14247 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 07:48:05.563296   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 07:48:05.563317   14247 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 07:48:05.571930   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:05.572836   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 07:48:05.572854   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 07:48:05.596056   14247 node_ready.go:35] waiting up to 6m0s for node "addons-610291" to be "Ready" ...
	I1026 07:48:05.596604   14247 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
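
This line and the long bash pipeline at 07:48:05.290 are the same step: minikube fetches the coredns ConfigMap, splices a hosts plugin block (192.168.49.1 host.minikube.internal, with fallthrough) in front of the "forward . /etc/resolv.conf" directive, and kubectl-replaces the ConfigMap so in-cluster pods can resolve the host gateway by name. The text surgery itself, sketched on a Corefile string (indentation assumed to match the stock minikube Corefile):

package sketch

import "strings"

// injectHostRecord inserts a CoreDNS hosts block before the forward
// directive, like the sed pipeline in the log.
func injectHostRecord(corefile, gatewayIP string) string {
	block := "        hosts {\n" +
		"           " + gatewayIP + " host.minikube.internal\n" +
		"           fallthrough\n" +
		"        }\n"
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, block+marker, 1)
}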
	I1026 07:48:05.597754   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 07:48:05.597774   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 07:48:05.597846   14247 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:05.597888   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 07:48:05.599855   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:05.599935   14247 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 07:48:05.633365   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 07:48:05.633393   14247 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 07:48:05.641086   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:05.641438   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 07:48:05.641527   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 07:48:05.647658   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:05.650139   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 07:48:05.650158   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 07:48:05.684748   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 07:48:05.684772   14247 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 07:48:05.706070   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:05.706096   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 07:48:05.721102   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 07:48:05.721194   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 07:48:05.724791   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:05.741192   14247 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:05.741211   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 07:48:05.761865   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:05.762221   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 07:48:05.762274   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 07:48:05.785817   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:05.807520   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 07:48:05.807612   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 07:48:05.870530   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 07:48:05.870555   14247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 07:48:05.924584   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 07:48:05.924663   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 07:48:05.976391   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 07:48:05.976412   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 07:48:06.056616   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:06.056643   14247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 07:48:06.100405   14247 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-610291" context rescaled to 1 replicas
	I1026 07:48:06.111604   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:06.709763   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.194972988s)
	I1026 07:48:06.709802   14247 addons.go:479] Verifying addon ingress=true in "addons-610291"
	I1026 07:48:06.709900   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.157731349s)
	I1026 07:48:06.710697   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.138731209s)
	W1026 07:48:06.710735   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:06.710753   14247 retry.go:31] will retry after 150.812923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:06.710794   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.06955818s)
	I1026 07:48:06.710823   14247 addons.go:479] Verifying addon metrics-server=true in "addons-610291"
	I1026 07:48:06.710880   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.063200762s)
	I1026 07:48:06.710896   14247 addons.go:479] Verifying addon registry=true in "addons-610291"
	I1026 07:48:06.711360   14247 out.go:179] * Verifying ingress addon...
	I1026 07:48:06.712956   14247 out.go:179] * Verifying registry addon...
	I1026 07:48:06.712976   14247 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-610291 service yakd-dashboard -n yakd-dashboard
	
	I1026 07:48:06.713696   14247 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 07:48:06.715240   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 07:48:06.716615   14247 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 07:48:06.716949   14247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 07:48:06.716965   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:06.862998   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:07.170847   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.384937461s)
	W1026 07:48:07.170892   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 07:48:07.170915   14247 retry.go:31] will retry after 180.789796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 07:48:07.171156   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.059460086s)
	I1026 07:48:07.171191   14247 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-610291"
	I1026 07:48:07.173844   14247 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 07:48:07.176604   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 07:48:07.180463   14247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 07:48:07.180483   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:07.281369   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:07.281611   14247 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 07:48:07.281628   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:07.352754   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1026 07:48:07.481456   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:07.481491   14247 retry.go:31] will retry after 231.837783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:48:07.599636   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:07.680347   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:07.714349   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:07.716692   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:07.717615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:08.179451   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:08.217172   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:08.217583   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:08.680560   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:08.717053   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:08.717709   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.179873   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:09.280562   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.280766   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:09.679527   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:09.717159   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:09.717551   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.838101   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.485304365s)
	I1026 07:48:09.838182   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.123799085s)
	W1026 07:48:09.838225   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:09.838257   14247 retry.go:31] will retry after 457.886509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:48:10.099158   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:10.179746   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:10.281094   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:10.281245   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:10.297303   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:10.681038   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:10.717359   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:10.717575   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:10.823399   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:10.823427   14247 retry.go:31] will retry after 1.248439599s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:11.180163   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:11.281282   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:11.281502   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:11.680633   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:11.717219   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:11.717831   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.072216   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:12.180576   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:12.281591   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.281756   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 07:48:12.593677   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:12.593707   14247 retry.go:31] will retry after 700.854454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:48:12.598951   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:12.679615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:12.717159   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:12.717833   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.799044   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 07:48:12.799112   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:12.816164   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:12.930110   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 07:48:12.942973   14247 addons.go:238] Setting addon gcp-auth=true in "addons-610291"
	I1026 07:48:12.943048   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:12.943588   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:12.962688   14247 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 07:48:12.962735   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:12.979705   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:13.077691   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:13.078980   14247 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 07:48:13.080302   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 07:48:13.080319   14247 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 07:48:13.093496   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 07:48:13.093516   14247 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 07:48:13.106737   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 07:48:13.106755   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 07:48:13.119180   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 07:48:13.179181   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:13.216623   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:13.217475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:13.294696   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:13.426675   14247 addons.go:479] Verifying addon gcp-auth=true in "addons-610291"
	I1026 07:48:13.428487   14247 out.go:179] * Verifying gcp-auth addon...
	I1026 07:48:13.430984   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 07:48:13.434873   14247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 07:48:13.434901   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:13.680825   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:13.716743   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:13.717404   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:13.848454   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:13.848487   14247 retry.go:31] will retry after 2.481579043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:13.933904   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:14.180271   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:14.216699   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:14.217478   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:14.434672   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:14.599113   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:14.679985   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:14.716854   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:14.718423   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:14.934009   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:15.180007   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:15.216614   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:15.217351   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:15.433824   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:15.679789   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:15.717311   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:15.717851   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:15.934819   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:16.180367   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:16.216698   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:16.217415   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:16.330812   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:16.433582   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:16.599424   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:16.679330   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:16.717069   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:16.717352   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:16.846176   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:16.846218   14247 retry.go:31] will retry after 3.360187984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:16.933591   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:17.179544   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:17.217302   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:17.217744   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:17.434168   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:17.679925   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:17.716384   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:17.718047   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:17.933580   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:18.179482   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:18.217015   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:18.217474   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:18.434106   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:18.679909   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:18.716434   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:18.717789   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:18.934358   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:19.098831   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:19.179358   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:19.216867   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:19.217426   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:19.434416   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:19.679578   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:19.717059   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:19.717615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:19.934480   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:20.179914   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:20.207025   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:20.217171   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:20.217768   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:20.434007   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:20.680619   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:20.717368   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:20.717793   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:20.725696   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:20.725721   14247 retry.go:31] will retry after 2.38893853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:20.934342   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:21.179244   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:21.216742   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:21.217219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:21.434278   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:21.598727   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:21.678947   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:21.716692   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:21.718195   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:21.933823   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:22.179688   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:22.217244   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:22.217746   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:22.434177   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:22.680207   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:22.716616   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:22.717240   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:22.933701   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:23.115300   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:23.180628   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:23.217124   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:23.217642   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:23.434203   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:23.638456   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:23.638480   14247 retry.go:31] will retry after 4.646816814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:23.679850   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:23.716069   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:23.717644   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:23.934140   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:24.098560   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:24.180556   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:24.217083   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:24.217666   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:24.434140   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:24.679219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:24.716719   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:24.718193   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:24.933676   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:25.179613   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:25.216512   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:25.218154   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:25.433907   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:25.679885   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:25.716542   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:25.717935   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:25.933405   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:26.098935   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:26.179936   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:26.216563   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:26.218050   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:26.433737   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:26.679408   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:26.716881   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:26.717759   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:26.934370   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:27.179138   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:27.216752   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:27.217234   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:27.433911   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:27.679858   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:27.716129   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:27.717593   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:27.934144   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:28.180327   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:28.216903   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:28.217475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
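
[Editor's note] The interleaved kapi.go lines above come from separate pollers, one per addon label selector, each re-listing pods roughly every half second until everything matching the selector leaves Pending. A minimal client-go sketch of one such loop follows; the kubeconfig path and the selector are taken from this log, everything else is illustrative.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForSelector blocks until every pod matching sel is Running.
func waitForSelector(cs *kubernetes.Clientset, sel string) error {
	for {
		pods, err := cs.CoreV1().Pods("").List(context.TODO(),
			metav1.ListOptions{LabelSelector: sel})
		if err != nil {
			return err
		}
		ready := len(pods.Items) > 0 // no pods yet also counts as Pending
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				fmt.Printf("waiting for pod %q, current state: %s\n", sel, p.Status.Phase)
				ready = false
			}
		}
		if ready {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the ~0.5s cadence above
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = waitForSelector(cs, "kubernetes.io/minikube-addons=registry")
}
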
	I1026 07:48:28.285654   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:28.434416   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:28.599675   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:28.679851   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:28.716433   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:28.717979   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:28.804753   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:28.804777   14247 retry.go:31] will retry after 6.113753708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
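
[Editor's note] The delays in these retry.go lines grow roughly geometrically with jitter (4.6s, 6.1s, 8.9s, and later 25.6s). The sketch below shows that general pattern; it is not minikube's actual retry.go, and the attempt count and base delay are illustrative.

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retry re-runs fn with a growing, jittered delay until it succeeds or the
// attempt budget is spent.
func retry(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 1; i <= attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		if i == attempts {
			return fmt.Errorf("giving up after %d attempts: %w", i, err)
		}
		// Jitter keeps repeated failures from synchronizing.
		sleep := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return nil
}

func main() {
	_ = retry(4, 3*time.Second, func() error {
		return fmt.Errorf("apply failed") // stand-in for the kubectl apply above
	})
}
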
	I1026 07:48:28.934582   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:29.179439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:29.216967   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:29.217779   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:29.434457   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:29.679363   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:29.716832   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:29.717530   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:29.933936   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:30.180622   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:30.217039   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:30.217628   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:30.434439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:30.679457   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:30.717094   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:30.717562   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:30.934651   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:31.099358   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:31.179743   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:31.217213   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:31.217823   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:31.433563   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:31.679684   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:31.717297   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:31.717887   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:31.933529   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:32.179752   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:32.217188   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:32.217795   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:32.434389   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:32.678964   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:32.716598   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:32.717843   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:32.934221   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:33.179350   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:33.216846   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:33.217402   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:33.434165   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:33.598461   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:33.679958   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:33.716748   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:33.718191   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:33.933621   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:34.179327   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:34.216654   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:34.217300   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:34.434006   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:34.680480   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:34.717101   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:34.717658   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:34.918990   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:34.933741   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:35.179708   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:35.216463   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:35.218432   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:35.434328   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:35.444023   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:35.444050   14247 retry.go:31] will retry after 8.889779837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:35.679924   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:35.716044   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:35.717495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:35.934020   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:36.098649   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:36.179058   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:36.216728   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:36.218142   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:36.434129   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:36.679421   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:36.716954   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:36.717617   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:36.934072   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:37.179795   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:37.216222   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:37.217578   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:37.433992   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:37.679777   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:37.717451   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:37.718039   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:37.933449   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:38.099306   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:38.179798   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:38.216841   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:38.218345   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:38.433609   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:38.679518   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:38.717068   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:38.717673   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:38.934638   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:39.179558   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:39.216953   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:39.217793   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:39.434588   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:39.679771   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:39.717496   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:39.718061   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:39.933525   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:40.179542   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:40.217197   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:40.217805   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:40.434532   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:40.599125   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:40.679757   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:40.717499   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:40.717860   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:40.934381   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:41.179445   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:41.217050   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:41.217581   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:41.434386   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:41.679425   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:41.717187   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:41.717689   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:41.934212   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:42.179245   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:42.216811   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:42.217411   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:42.433844   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:42.599516   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:42.680030   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:42.716300   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:42.717781   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:42.933818   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:43.179983   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:43.216419   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:43.218161   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:43.434050   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:43.679775   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:43.716231   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:43.717705   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:43.934511   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:44.179734   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:44.217322   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:44.217854   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:44.333981   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:44.433495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:44.679844   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:44.716666   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:44.718156   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:44.853470   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:44.853507   14247 retry.go:31] will retry after 25.607623623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:44.933827   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:45.099394   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:45.179751   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:45.216534   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:45.217974   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:45.434301   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:45.679055   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:45.716589   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:45.718055   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:45.933714   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:46.179558   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:46.217301   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:46.218209   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:46.433894   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:46.679749   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:46.717441   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:46.717805   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:46.934076   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:47.181237   14247 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 07:48:47.181275   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:47.216557   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:47.218528   14247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 07:48:47.218547   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:47.434456   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:47.599874   14247 node_ready.go:49] node "addons-610291" is "Ready"
	I1026 07:48:47.599909   14247 node_ready.go:38] duration metric: took 42.003820542s for node "addons-610291" to be "Ready" ...
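
[Editor's note] node_ready.go polls the node object until its NodeReady condition flips to True, which took about 42s here. A minimal client-go sketch of that condition check; the node name and kubeconfig path come from this log, the rest is illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "addons-610291", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// c.Status is "True" once the kubelet reports the node healthy.
			fmt.Printf("node %q has Ready=%s\n", node.Name, c.Status)
		}
	}
}
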
	I1026 07:48:47.599927   14247 api_server.go:52] waiting for apiserver process to appear ...
	I1026 07:48:47.599976   14247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 07:48:47.619028   14247 api_server.go:72] duration metric: took 42.485263909s to wait for apiserver process to appear ...
	I1026 07:48:47.619071   14247 api_server.go:88] waiting for apiserver healthz status ...
	I1026 07:48:47.619095   14247 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 07:48:47.623883   14247 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 07:48:47.624839   14247 api_server.go:141] control plane version: v1.34.1
	I1026 07:48:47.624868   14247 api_server.go:131] duration metric: took 5.788922ms to wait for apiserver health ...
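
[Editor's note] The healthz probe above is a plain HTTPS GET that expects 200 with body "ok". A minimal sketch follows; the real client authenticates against the cluster CA, and InsecureSkipVerify here is only to keep the illustration short.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Skip certificate verification purely for illustration; minikube's
	// checker trusts the cluster CA instead.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
}
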
	I1026 07:48:47.624879   14247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 07:48:47.629271   14247 system_pods.go:59] 20 kube-system pods found
	I1026 07:48:47.629338   14247 system_pods.go:61] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:47.629355   14247 system_pods.go:61] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:47.629366   14247 system_pods.go:61] "csi-hostpath-attacher-0" [427cd88d-7809-4d5c-b742-dc613723c8eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:47.629378   14247 system_pods.go:61] "csi-hostpath-resizer-0" [5632b492-535d-49fc-b4f4-780142412509] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 07:48:47.629390   14247 system_pods.go:61] "csi-hostpathplugin-nnl9n" [b19e7a2f-2826-4c12-9872-05c7b3daa41a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:47.629399   14247 system_pods.go:61] "etcd-addons-610291" [e218f6a9-b3e5-47a6-affc-0ced70bf0a2e] Running
	I1026 07:48:47.629405   14247 system_pods.go:61] "kindnet-b4jwg" [29fff50a-3d72-418d-8298-36d257dc9068] Running
	I1026 07:48:47.629414   14247 system_pods.go:61] "kube-apiserver-addons-610291" [9dcb8e97-6fe0-4cb1-9b62-d8193e9965f2] Running
	I1026 07:48:47.629419   14247 system_pods.go:61] "kube-controller-manager-addons-610291" [6e72e4d1-d1f5-45db-a473-17bee208af30] Running
	I1026 07:48:47.629430   14247 system_pods.go:61] "kube-ingress-dns-minikube" [16fe29e4-d3c1-404f-b1f5-d18bcec18f13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:47.629436   14247 system_pods.go:61] "kube-proxy-mxqr8" [39564011-18e0-4076-9355-be6c38423d9e] Running
	I1026 07:48:47.629448   14247 system_pods.go:61] "kube-scheduler-addons-610291" [01bf8ae9-291c-4cd1-a1bd-c60d1e1b158e] Running
	I1026 07:48:47.629455   14247 system_pods.go:61] "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:47.629468   14247 system_pods.go:61] "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:47.629480   14247 system_pods.go:61] "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:47.629488   14247 system_pods.go:61] "registry-creds-764b6fb674-4mf5m" [5f373a48-52c9-441e-a2db-28351bc83a48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:47.629496   14247 system_pods.go:61] "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:47.629507   14247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-klrbn" [f542d0aa-2574-4ee1-b4e7-f918488c019f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.629520   14247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qx7lp" [7e6af6b6-ad2b-4990-ab5b-aca4b8ac704e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.629528   14247 system_pods.go:61] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:47.629542   14247 system_pods.go:74] duration metric: took 4.656727ms to wait for pod list to return data ...
	I1026 07:48:47.629552   14247 default_sa.go:34] waiting for default service account to be created ...
	I1026 07:48:47.631782   14247 default_sa.go:45] found service account: "default"
	I1026 07:48:47.631800   14247 default_sa.go:55] duration metric: took 2.241157ms for default service account to be created ...
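
[Editor's note] default_sa.go simply polls for the "default" ServiceAccount in the default namespace, which the controller-manager creates shortly after startup. A one-call sketch, assuming a clientset cs built as in the earlier sketches:

package waitsketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// defaultSAExists reports whether the "default" ServiceAccount has been
// created yet — the condition waited on above.
func defaultSAExists(cs *kubernetes.Clientset) bool {
	_, err := cs.CoreV1().ServiceAccounts("default").Get(
		context.TODO(), "default", metav1.GetOptions{})
	return err == nil
}
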
	I1026 07:48:47.631810   14247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 07:48:47.727663   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:47.727708   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:47.727871   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:47.728747   14247 system_pods.go:86] 20 kube-system pods found
	I1026 07:48:47.728771   14247 system_pods.go:89] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:47.728778   14247 system_pods.go:89] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:47.728785   14247 system_pods.go:89] "csi-hostpath-attacher-0" [427cd88d-7809-4d5c-b742-dc613723c8eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:47.728790   14247 system_pods.go:89] "csi-hostpath-resizer-0" [5632b492-535d-49fc-b4f4-780142412509] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 07:48:47.728796   14247 system_pods.go:89] "csi-hostpathplugin-nnl9n" [b19e7a2f-2826-4c12-9872-05c7b3daa41a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:47.728799   14247 system_pods.go:89] "etcd-addons-610291" [e218f6a9-b3e5-47a6-affc-0ced70bf0a2e] Running
	I1026 07:48:47.728804   14247 system_pods.go:89] "kindnet-b4jwg" [29fff50a-3d72-418d-8298-36d257dc9068] Running
	I1026 07:48:47.728808   14247 system_pods.go:89] "kube-apiserver-addons-610291" [9dcb8e97-6fe0-4cb1-9b62-d8193e9965f2] Running
	I1026 07:48:47.728811   14247 system_pods.go:89] "kube-controller-manager-addons-610291" [6e72e4d1-d1f5-45db-a473-17bee208af30] Running
	I1026 07:48:47.728818   14247 system_pods.go:89] "kube-ingress-dns-minikube" [16fe29e4-d3c1-404f-b1f5-d18bcec18f13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:47.728821   14247 system_pods.go:89] "kube-proxy-mxqr8" [39564011-18e0-4076-9355-be6c38423d9e] Running
	I1026 07:48:47.728825   14247 system_pods.go:89] "kube-scheduler-addons-610291" [01bf8ae9-291c-4cd1-a1bd-c60d1e1b158e] Running
	I1026 07:48:47.728830   14247 system_pods.go:89] "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:47.728837   14247 system_pods.go:89] "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:47.728843   14247 system_pods.go:89] "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:47.728849   14247 system_pods.go:89] "registry-creds-764b6fb674-4mf5m" [5f373a48-52c9-441e-a2db-28351bc83a48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:47.728854   14247 system_pods.go:89] "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:47.728858   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-klrbn" [f542d0aa-2574-4ee1-b4e7-f918488c019f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.728867   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qx7lp" [7e6af6b6-ad2b-4990-ab5b-aca4b8ac704e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.728873   14247 system_pods.go:89] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:47.728887   14247 retry.go:31] will retry after 282.680512ms: missing components: kube-dns
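
[Editor's note] "missing components: kube-dns" means no CoreDNS pod has reached Running yet; CoreDNS pods conventionally carry the k8s-app=kube-dns label. A sketch of that check, again assuming a clientset cs built as in the earlier sketches:

package waitsketch

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// kubeDNSRunning reports whether at least one CoreDNS pod is Running —
// the "kube-dns" component the retries above wait for.
func kubeDNSRunning(cs *kubernetes.Clientset) bool {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
	if err != nil {
		return false
	}
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			return true
		}
	}
	return false
}
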
	I1026 07:48:47.935380   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:48.017085   14247 system_pods.go:86] 20 kube-system pods found
	I1026 07:48:48.017126   14247 system_pods.go:89] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:48.017153   14247 system_pods.go:89] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:48.017164   14247 system_pods.go:89] "csi-hostpath-attacher-0" [427cd88d-7809-4d5c-b742-dc613723c8eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:48.017173   14247 system_pods.go:89] "csi-hostpath-resizer-0" [5632b492-535d-49fc-b4f4-780142412509] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 07:48:48.017181   14247 system_pods.go:89] "csi-hostpathplugin-nnl9n" [b19e7a2f-2826-4c12-9872-05c7b3daa41a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:48.017186   14247 system_pods.go:89] "etcd-addons-610291" [e218f6a9-b3e5-47a6-affc-0ced70bf0a2e] Running
	I1026 07:48:48.017192   14247 system_pods.go:89] "kindnet-b4jwg" [29fff50a-3d72-418d-8298-36d257dc9068] Running
	I1026 07:48:48.017198   14247 system_pods.go:89] "kube-apiserver-addons-610291" [9dcb8e97-6fe0-4cb1-9b62-d8193e9965f2] Running
	I1026 07:48:48.017203   14247 system_pods.go:89] "kube-controller-manager-addons-610291" [6e72e4d1-d1f5-45db-a473-17bee208af30] Running
	I1026 07:48:48.017212   14247 system_pods.go:89] "kube-ingress-dns-minikube" [16fe29e4-d3c1-404f-b1f5-d18bcec18f13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:48.017217   14247 system_pods.go:89] "kube-proxy-mxqr8" [39564011-18e0-4076-9355-be6c38423d9e] Running
	I1026 07:48:48.017223   14247 system_pods.go:89] "kube-scheduler-addons-610291" [01bf8ae9-291c-4cd1-a1bd-c60d1e1b158e] Running
	I1026 07:48:48.017230   14247 system_pods.go:89] "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:48.017264   14247 system_pods.go:89] "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:48.017273   14247 system_pods.go:89] "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:48.017283   14247 system_pods.go:89] "registry-creds-764b6fb674-4mf5m" [5f373a48-52c9-441e-a2db-28351bc83a48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:48.017291   14247 system_pods.go:89] "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:48.017302   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-klrbn" [f542d0aa-2574-4ee1-b4e7-f918488c019f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:48.017311   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qx7lp" [7e6af6b6-ad2b-4990-ab5b-aca4b8ac704e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:48.017319   14247 system_pods.go:89] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:48.017338   14247 retry.go:31] will retry after 344.079184ms: missing components: kube-dns
	I1026 07:48:48.181837   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:48.218573   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.218960   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:48.368866   14247 system_pods.go:86] 20 kube-system pods found
	I1026 07:48:48.368951   14247 system_pods.go:89] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:48.368963   14247 system_pods.go:89] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Running
	I1026 07:48:48.368976   14247 system_pods.go:89] "csi-hostpath-attacher-0" [427cd88d-7809-4d5c-b742-dc613723c8eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:48.368985   14247 system_pods.go:89] "csi-hostpath-resizer-0" [5632b492-535d-49fc-b4f4-780142412509] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 07:48:48.368994   14247 system_pods.go:89] "csi-hostpathplugin-nnl9n" [b19e7a2f-2826-4c12-9872-05c7b3daa41a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:48.369000   14247 system_pods.go:89] "etcd-addons-610291" [e218f6a9-b3e5-47a6-affc-0ced70bf0a2e] Running
	I1026 07:48:48.369006   14247 system_pods.go:89] "kindnet-b4jwg" [29fff50a-3d72-418d-8298-36d257dc9068] Running
	I1026 07:48:48.369011   14247 system_pods.go:89] "kube-apiserver-addons-610291" [9dcb8e97-6fe0-4cb1-9b62-d8193e9965f2] Running
	I1026 07:48:48.369018   14247 system_pods.go:89] "kube-controller-manager-addons-610291" [6e72e4d1-d1f5-45db-a473-17bee208af30] Running
	I1026 07:48:48.369025   14247 system_pods.go:89] "kube-ingress-dns-minikube" [16fe29e4-d3c1-404f-b1f5-d18bcec18f13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:48.369030   14247 system_pods.go:89] "kube-proxy-mxqr8" [39564011-18e0-4076-9355-be6c38423d9e] Running
	I1026 07:48:48.369035   14247 system_pods.go:89] "kube-scheduler-addons-610291" [01bf8ae9-291c-4cd1-a1bd-c60d1e1b158e] Running
	I1026 07:48:48.369042   14247 system_pods.go:89] "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:48.369049   14247 system_pods.go:89] "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:48.369059   14247 system_pods.go:89] "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:48.369089   14247 system_pods.go:89] "registry-creds-764b6fb674-4mf5m" [5f373a48-52c9-441e-a2db-28351bc83a48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:48.369097   14247 system_pods.go:89] "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:48.369107   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-klrbn" [f542d0aa-2574-4ee1-b4e7-f918488c019f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:48.369117   14247 system_pods.go:89] "snapshot-controller-7d9fbc56b8-qx7lp" [7e6af6b6-ad2b-4990-ab5b-aca4b8ac704e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:48.369122   14247 system_pods.go:89] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Running
	I1026 07:48:48.369133   14247 system_pods.go:126] duration metric: took 737.315625ms to wait for k8s-apps to be running ...
	I1026 07:48:48.369142   14247 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 07:48:48.369193   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 07:48:48.434803   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:48.438474   14247 system_svc.go:56] duration metric: took 69.323041ms WaitForService to wait for kubelet
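
[Editor's note] The kubelet check shells out to systemctl; is-active --quiet answers through the exit code alone, with no output to parse. A local sketch with os/exec (the real call runs with sudo over minikube's ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Exit code 0 means the unit is active; any other code means it is not.
	err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
	fmt.Println("kubelet active:", err == nil)
}
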
	I1026 07:48:48.438502   14247 kubeadm.go:586] duration metric: took 43.304744519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:48:48.438523   14247 node_conditions.go:102] verifying NodePressure condition ...
	I1026 07:48:48.441941   14247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 07:48:48.441963   14247 node_conditions.go:123] node cpu capacity is 8
	I1026 07:48:48.441975   14247 node_conditions.go:105] duration metric: took 3.447249ms to run NodePressure ...
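
[Editor's note] The NodePressure step reads capacity straight off the node object (here 304681132Ki of ephemeral storage and 8 CPUs). A sketch, assuming the same clientset cs as the earlier sketches:

package waitsketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printCapacity logs the two capacity figures the NodePressure check reports.
func printCapacity(cs *kubernetes.Clientset, name string) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	storage := node.Status.Capacity[corev1.ResourceEphemeralStorage]
	cpu := node.Status.Capacity[corev1.ResourceCPU]
	fmt.Printf("node storage ephemeral capacity is %s\n", storage.String())
	fmt.Printf("node cpu capacity is %s\n", cpu.String())
	return nil
}
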
	I1026 07:48:48.441987   14247 start.go:241] waiting for startup goroutines ...
	I1026 07:48:48.680150   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:48.717677   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.718826   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:48.934171   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:49.180353   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:49.281425   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:49.281633   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:49.434431   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:49.681312   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:49.717224   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:49.717624   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:49.934542   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:50.181352   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:50.217045   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:50.217560   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:50.434590   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:50.679721   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:50.717562   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:50.717925   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:50.935003   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:51.180108   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:51.216913   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:51.218435   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:51.434975   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:51.696659   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:51.717078   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:51.717765   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:51.934287   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:52.180111   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:52.216388   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:52.217893   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:52.434079   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:52.680662   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:52.716784   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:52.718451   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:52.934291   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:53.180164   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:53.280590   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:53.280791   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:53.434361   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:53.680668   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:53.716903   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:53.718458   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:53.934410   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:54.180454   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:54.252091   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:54.252210   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:54.433854   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:54.679781   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:54.717445   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:54.717915   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:54.933508   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:55.179452   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:55.217061   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:55.217665   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:55.436614   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:55.681369   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:55.782381   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:55.782404   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:55.934539   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:56.179911   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:56.217050   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:56.218547   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:56.434641   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:56.679726   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:56.717700   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:56.717988   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:56.934486   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:57.180818   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:57.219366   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:57.219409   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:57.434028   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:57.680230   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:57.716874   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:57.718575   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:57.934389   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:58.180148   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:58.217073   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:58.218533   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:58.434334   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:58.680689   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:58.717500   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:58.718968   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:58.935215   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:59.180943   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:59.217215   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:59.218444   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:59.434315   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:59.680495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:59.717557   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:59.717934   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:00.005486   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:00.180156   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:00.217027   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:00.218516   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:00.443827   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:00.679947   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:00.717556   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:00.718013   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:00.933885   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:01.179933   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:01.216396   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:01.218224   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:01.434291   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:01.680055   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:01.716672   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:01.717995   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:01.933996   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:02.180670   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:02.218283   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:02.218599   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:02.434662   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:02.679967   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:02.780017   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:02.780095   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:02.934595   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:03.180801   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:03.219032   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:03.219236   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:03.435835   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:03.679981   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:03.716982   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:03.718413   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:03.934563   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:04.179864   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:04.219151   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:04.219572   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:04.434615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:04.679939   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:04.716565   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:04.717992   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:04.933441   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:05.180353   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:05.217461   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:05.218053   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:05.433601   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:05.680141   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:05.718412   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:05.718550   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:05.935686   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:06.179743   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:06.216830   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:06.218740   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:06.435129   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:06.682216   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:06.719597   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:06.720711   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:06.935576   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:07.180716   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:07.217672   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:07.217960   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:07.434412   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:07.680640   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:07.717524   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:07.717944   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:07.934494   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:08.179965   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:08.216371   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:08.218086   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:08.433993   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:08.680341   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:08.717078   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:08.717556   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:08.934158   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:09.180125   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:09.217516   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:09.218488   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:09.435006   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:09.680717   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:09.717786   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:09.718384   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:09.955612   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:10.180799   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:10.217824   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:10.218802   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:10.433707   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:10.461803   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:10.680615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:10.717244   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:10.718368   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:10.934425   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:49:11.156908   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:11.156940   14247 retry.go:31] will retry after 44.795297433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:11.179847   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.217560   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.217796   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.435068   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:11.679958   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.717015   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.718039   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.934219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.180724   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.221472   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.221498   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.434590   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.679622   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.717604   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.717916   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.934636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.181019   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.217146   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.217763   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.434651   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.679439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.716968   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.717597   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.934035   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.180161   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.217356   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.218551   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.435344   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.680472   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.717449   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.717915   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.933636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.180369   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.217744   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.217781   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.434636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.679945   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.717487   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.718195   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.934530   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.181377   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.219342   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.219475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:16.435063   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.680592   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.720352   14247 kapi.go:107] duration metric: took 1m10.005110417s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 07:49:16.720416   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.934132   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.200488   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.216781   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.502121   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.680396   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.717506   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.934022   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.180320   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.216924   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.435133   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.680497   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.717073   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.934151   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.261536   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.261560   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.434014   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.680395   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.717799   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.935452   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.180698   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.217725   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:20.436444   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.680074   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.717747   14247 kapi.go:107] duration metric: took 1m14.004047934s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 07:49:20.934441   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.180808   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.434366   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.680495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.934211   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.180724   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:22.434554   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.681048   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:22.933553   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.179307   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.433864   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.680179   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.933809   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:24.179928   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:24.434620   14247 kapi.go:107] duration metric: took 1m11.003636039s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 07:49:24.436293   14247 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-610291 cluster.
	I1026 07:49:24.437567   14247 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 07:49:24.438790   14247 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
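	The three gcp-auth messages above describe an opt-out mechanism: pods carrying a label with the gcp-auth-skip-secret key are skipped by the credential mount. A minimal sketch, assuming the label belongs in the pod's metadata as the message suggests (name, image, and command below are illustrative, not from this test run):
	
	    apiVersion: v1
	    kind: Pod
	    metadata:
	      name: no-gcp-creds                 # illustrative name
	      labels:
	        gcp-auth-skip-secret: "true"     # opts this pod out of the credential mount
	    spec:
	      containers:
	        - name: app
	          image: busybox                 # illustrative image
	          command: ["sleep", "3600"]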
	I1026 07:49:24.681115   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.179645   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.680015   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.180606   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.712294   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.180095   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.679380   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.180159   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.679912   14247 kapi.go:107] duration metric: took 1m21.503309039s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1026 07:49:55.953323   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 07:49:56.486627   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:49:56.486718   14247 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
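	The recurring validation failure above means kubectl refused ig-crd.yaml because at least one document in the file is missing the two required top-level type fields, apiVersion and kind; the --validate=false flag named in the message would only suppress the check, not repair the manifest. A minimal, hypothetical CRD header showing the fields the validator reported as "not set" (group and kind names below are placeholders, not the actual inspektor-gadget CRD):
	
	    apiVersion: apiextensions.k8s.io/v1  # required; reported "apiVersion not set"
	    kind: CustomResourceDefinition       # required; reported "kind not set"
	    metadata:
	      name: traces.gadget.example.io     # placeholder
	    spec:
	      group: gadget.example.io           # placeholder
	      scope: Namespaced
	      names:
	        plural: traces
	        singular: trace
	        kind: Trace
	      versions:
	        - name: v1alpha1
	          served: true
	          storage: true
	          schema:
	            openAPIV3Schema:
	              type: object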
	I1026 07:49:56.488759   14247 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 07:49:56.490189   14247 addons.go:514] duration metric: took 1m51.356512789s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 07:49:56.490224   14247 start.go:246] waiting for cluster config update ...
	I1026 07:49:56.490241   14247 start.go:255] writing updated cluster config ...
	I1026 07:49:56.490480   14247 ssh_runner.go:195] Run: rm -f paused
	I1026 07:49:56.494321   14247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:49:56.497814   14247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dqbbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.501582   14247 pod_ready.go:94] pod "coredns-66bc5c9577-dqbbr" is "Ready"
	I1026 07:49:56.501604   14247 pod_ready.go:86] duration metric: took 3.77084ms for pod "coredns-66bc5c9577-dqbbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.503423   14247 pod_ready.go:83] waiting for pod "etcd-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.506548   14247 pod_ready.go:94] pod "etcd-addons-610291" is "Ready"
	I1026 07:49:56.506567   14247 pod_ready.go:86] duration metric: took 3.126562ms for pod "etcd-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.508302   14247 pod_ready.go:83] waiting for pod "kube-apiserver-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.511529   14247 pod_ready.go:94] pod "kube-apiserver-addons-610291" is "Ready"
	I1026 07:49:56.511549   14247 pod_ready.go:86] duration metric: took 3.228239ms for pod "kube-apiserver-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.513102   14247 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.898347   14247 pod_ready.go:94] pod "kube-controller-manager-addons-610291" is "Ready"
	I1026 07:49:56.898371   14247 pod_ready.go:86] duration metric: took 385.251705ms for pod "kube-controller-manager-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.098537   14247 pod_ready.go:83] waiting for pod "kube-proxy-mxqr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.498360   14247 pod_ready.go:94] pod "kube-proxy-mxqr8" is "Ready"
	I1026 07:49:57.498386   14247 pod_ready.go:86] duration metric: took 399.825144ms for pod "kube-proxy-mxqr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.698339   14247 pod_ready.go:83] waiting for pod "kube-scheduler-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:58.097699   14247 pod_ready.go:94] pod "kube-scheduler-addons-610291" is "Ready"
	I1026 07:49:58.097724   14247 pod_ready.go:86] duration metric: took 399.362741ms for pod "kube-scheduler-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:58.097735   14247 pod_ready.go:40] duration metric: took 1.603386693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:49:58.139679   14247 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 07:49:58.141742   14247 out.go:179] * Done! kubectl is now configured to use "addons-610291" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.806141571Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-z2x82/POD" id=1a35a73e-10b4-4cee-86b1-a7d4d1a7150c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.806232723Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.813432913Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-z2x82 Namespace:default ID:a97436140cfb791d0c2350570a9d97ed241ecdd7226062117f3093a21e74ce04 UID:d3c30ce7-8cef-4272-9253-75e2c6c89efb NetNS:/var/run/netns/9b04dbd4-2ee1-4004-b53a-efcc948fc2f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a8ad08}] Aliases:map[]}"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.813467588Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-z2x82 to CNI network \"kindnet\" (type=ptp)"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.823613765Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-z2x82 Namespace:default ID:a97436140cfb791d0c2350570a9d97ed241ecdd7226062117f3093a21e74ce04 UID:d3c30ce7-8cef-4272-9253-75e2c6c89efb NetNS:/var/run/netns/9b04dbd4-2ee1-4004-b53a-efcc948fc2f3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000a8ad08}] Aliases:map[]}"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.823727778Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-z2x82 for CNI network kindnet (type=ptp)"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.824602612Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.825381158Z" level=info msg="Ran pod sandbox a97436140cfb791d0c2350570a9d97ed241ecdd7226062117f3093a21e74ce04 with infra container: default/hello-world-app-5d498dc89-z2x82/POD" id=1a35a73e-10b4-4cee-86b1-a7d4d1a7150c name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.826533328Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=55999226-c754-470b-b97d-2235e96a6575 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.826653329Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=55999226-c754-470b-b97d-2235e96a6575 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.826686724Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=55999226-c754-470b-b97d-2235e96a6575 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.827289604Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=c3e85e42-44f1-4eaa-bb2e-203997bcd24a name=/runtime.v1.ImageService/PullImage
	Oct 26 07:52:44 addons-610291 crio[781]: time="2025-10-26T07:52:44.848666645Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.188623306Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=c3e85e42-44f1-4eaa-bb2e-203997bcd24a name=/runtime.v1.ImageService/PullImage
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.18922279Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=a0a84dd6-99d1-48e6-bce0-4503ff377a24 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.190670065Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=811ab600-13ca-4a05-96b2-ce6943554db9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.194713749Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-z2x82/hello-world-app" id=bdceb7c4-9d2b-402c-86e3-80e17d7fbdc9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.194824046Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.200069361Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.200215979Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/3034c64a24ad7d784dddfdd61cd44cf9afade12427d47733ef83fd230c27e885/merged/etc/passwd: no such file or directory"
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.200240632Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/3034c64a24ad7d784dddfdd61cd44cf9afade12427d47733ef83fd230c27e885/merged/etc/group: no such file or directory"
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.200478546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.230368957Z" level=info msg="Created container 5ee96a775afa4f88b39b4b719b5125296616b24ec5a999f87fe8eadf248865b7: default/hello-world-app-5d498dc89-z2x82/hello-world-app" id=bdceb7c4-9d2b-402c-86e3-80e17d7fbdc9 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.230946154Z" level=info msg="Starting container: 5ee96a775afa4f88b39b4b719b5125296616b24ec5a999f87fe8eadf248865b7" id=4730e950-e359-48c9-b5f9-0798a0fa7e06 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 07:52:45 addons-610291 crio[781]: time="2025-10-26T07:52:45.232991647Z" level=info msg="Started container" PID=9967 containerID=5ee96a775afa4f88b39b4b719b5125296616b24ec5a999f87fe8eadf248865b7 description=default/hello-world-app-5d498dc89-z2x82/hello-world-app id=4730e950-e359-48c9-b5f9-0798a0fa7e06 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a97436140cfb791d0c2350570a9d97ed241ecdd7226062117f3093a21e74ce04
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	5ee96a775afa4       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   a97436140cfb7       hello-world-app-5d498dc89-z2x82             default
	de5a853aac554       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   29d849e3f7ee0       registry-creds-764b6fb674-4mf5m             kube-system
	a08678d07c59e       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                                              2 minutes ago            Running             nginx                                    0                   b7420b0f0c85d       nginx                                       default
	bfc678b107c17       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   81279115937c0       busybox                                     default
	ff1e9c088f2c4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          3 minutes ago            Running             csi-snapshotter                          0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	ae9bf07bc6fbd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          3 minutes ago            Running             csi-provisioner                          0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	790b75b33837b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            3 minutes ago            Running             liveness-probe                           0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	6fbcc38580336       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           3 minutes ago            Running             hostpath                                 0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	9abeb3423d9e6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 3 minutes ago            Running             gcp-auth                                 0                   22dc5993fe473       gcp-auth-78565c9fb4-n2jng                   gcp-auth
	b52991ad016eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	b248cc878fb62       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            3 minutes ago            Running             gadget                                   0                   238142f6a4e0b       gadget-qvptl                                gadget
	85c0a4904ad2b       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             3 minutes ago            Running             controller                               0                   88a64fef92686       ingress-nginx-controller-675c5ddd98-s4j4n   ingress-nginx
	d22cb2ff56e59       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   9c72000c06e64       registry-proxy-xgtqv                        kube-system
	9787a3b2f3c6f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	48d279f0283b0       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   d40e313f87cfa       nvidia-device-plugin-daemonset-9g5j7        kube-system
	4bec0c61b08ba       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   91b694ebbaf6f       csi-hostpath-resizer-0                      kube-system
	a5104f945769b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   51e59d87fe1db       snapshot-controller-7d9fbc56b8-klrbn        kube-system
	d0946ef457f29       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   7d9a488fa4735       amd-gpu-device-plugin-79j4j                 kube-system
	d78d517513eb7       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   83565fd2ba0e9       csi-hostpath-attacher-0                     kube-system
	9606d1f8109db       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   1c97ac738dbda       snapshot-controller-7d9fbc56b8-qx7lp        kube-system
	755f6bc76edf9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   0fd296dfb7e36       ingress-nginx-admission-patch-d6z8s         ingress-nginx
	4f4894d9a3135       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   b6220cb94f3a8       yakd-dashboard-5ff678cb9-9mp2x              yakd-dashboard
	fbe791bbcaa55       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   8cca98cb5edc5       local-path-provisioner-648f6765c9-kr5wg     local-path-storage
	bec0f3b559fbe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   b3c1ef5df582d       ingress-nginx-admission-create-pdppz        ingress-nginx
	060d78eaf3f37       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               3 minutes ago            Running             cloud-spanner-emulator                   0                   a134960725cfa       cloud-spanner-emulator-86bd5cbb97-h4cbz     default
	d0e6d85b2ec86       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   c8c833aee7f0f       registry-6b586f9694-9xvr4                   kube-system
	604c3b50083ea       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   51fda9c9e446b       metrics-server-85b7d694d7-fs7sf             kube-system
	2749efc7ce147       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   48d415ccfa3b4       kube-ingress-dns-minikube                   kube-system
	0d587f45003f4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   9fc8185235027       coredns-66bc5c9577-dqbbr                    kube-system
	3504b65df25d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   9e45a9664cdd4       storage-provisioner                         kube-system
	e6e97259a969c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   ed953d165bbf5       kube-proxy-mxqr8                            kube-system
	4c0deee84eddb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   ba1bd3062a478       kindnet-b4jwg                               kube-system
	aa644f5a3e4c4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   6c0e5aac756c9       etcd-addons-610291                          kube-system
	2190af960ec64       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   29475dcb88eb3       kube-apiserver-addons-610291                kube-system
	f9726db7b5e96       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   ae8c57e37d754       kube-scheduler-addons-610291                kube-system
	a92d6c36860a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   5d0a6499d9ba8       kube-controller-manager-addons-610291       kube-system
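	
	The table above is CRI-level container state; it can be re-queried against the same profile while it is up (a sketch using the binary path from this run; crictl ships inside the minikube node image):
	
	  # list all CRI-O containers on the node, including exited ones
	  $ out/minikube-linux-amd64 -p addons-610291 ssh -- sudo crictl ps -a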
	
	
	==> coredns [0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5] <==
	[INFO] 10.244.0.22:49558 - 47935 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007688116s
	[INFO] 10.244.0.22:50802 - 5373 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004682366s
	[INFO] 10.244.0.22:48157 - 16433 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006144026s
	[INFO] 10.244.0.22:57088 - 56177 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004272234s
	[INFO] 10.244.0.22:60685 - 26338 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00524819s
	[INFO] 10.244.0.22:58451 - 48109 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001022634s
	[INFO] 10.244.0.22:33747 - 59035 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002167279s
	[INFO] 10.244.0.24:38090 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000216528s
	[INFO] 10.244.0.24:41535 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000172018s
	[INFO] 10.244.0.31:35423 - 48035 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000212724s
	[INFO] 10.244.0.31:53175 - 57026 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000244188s
	[INFO] 10.244.0.31:40053 - 24005 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000170608s
	[INFO] 10.244.0.31:52363 - 3187 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000129814s
	[INFO] 10.244.0.31:45664 - 55119 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000112079s
	[INFO] 10.244.0.31:39686 - 58331 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000149286s
	[INFO] 10.244.0.31:58730 - 14833 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004104773s
	[INFO] 10.244.0.31:49925 - 26912 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.005169679s
	[INFO] 10.244.0.31:47228 - 37465 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005752354s
	[INFO] 10.244.0.31:51618 - 21332 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005907016s
	[INFO] 10.244.0.31:50944 - 47138 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.004425301s
	[INFO] 10.244.0.31:60252 - 10682 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005866765s
	[INFO] 10.244.0.31:53855 - 4130 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.00416433s
	[INFO] 10.244.0.31:58721 - 14978 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005247122s
	[INFO] 10.244.0.31:38688 - 35247 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001680472s
	[INFO] 10.244.0.31:44491 - 27256 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001719178s
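	
	The NXDOMAIN fan-out above (kube-system.svc.cluster.local, svc.cluster.local, cluster.local, then the GCE internal suffixes, before the bare name returns NOERROR) is ordinary resolv.conf search-path expansion: with the default ndots:5 a pod tries every search domain before the literal name. A sketch for inspecting the injected search list, using the default/busybox pod listed under "describe nodes" below:
	
	  # show the search domains and ndots option the kubelet wrote into the pod
	  $ kubectl exec -n default busybox -- cat /etc/resolv.conf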
	
	
	==> describe nodes <==
	Name:               addons-610291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-610291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=addons-610291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T07_48_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-610291
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-610291"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 07:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-610291
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 07:52:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 07:52:35 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 07:52:35 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 07:52:35 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 07:52:35 +0000   Sun, 26 Oct 2025 07:48:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-610291
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4788153a-655f-4b2c-a534-38625b1e2dd6
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     cloud-spanner-emulator-86bd5cbb97-h4cbz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  default                     hello-world-app-5d498dc89-z2x82              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m25s
	  gadget                      gadget-qvptl                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  gcp-auth                    gcp-auth-78565c9fb4-n2jng                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m33s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-s4j4n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m40s
	  kube-system                 amd-gpu-device-plugin-79j4j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 coredns-66bc5c9577-dqbbr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m41s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 csi-hostpathplugin-nnl9n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 etcd-addons-610291                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m46s
	  kube-system                 kindnet-b4jwg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m41s
	  kube-system                 kube-apiserver-addons-610291                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-610291        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 kube-proxy-mxqr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-scheduler-addons-610291                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 metrics-server-85b7d694d7-fs7sf              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m40s
	  kube-system                 nvidia-device-plugin-daemonset-9g5j7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 registry-6b586f9694-9xvr4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-creds-764b6fb674-4mf5m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 registry-proxy-xgtqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 snapshot-controller-7d9fbc56b8-klrbn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 snapshot-controller-7d9fbc56b8-qx7lp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m39s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  local-path-storage          local-path-provisioner-648f6765c9-kr5wg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9mp2x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m39s  kube-proxy       
	  Normal  Starting                 4m47s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m46s  kubelet          Node addons-610291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m46s  kubelet          Node addons-610291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m46s  kubelet          Node addons-610291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m42s  node-controller  Node addons-610291 event: Registered Node addons-610291 in Controller
	  Normal  NodeReady                3m59s  kubelet          Node addons-610291 status is now: NodeReady
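	
	The node dump above is kubectl's own format; to re-query the live object (assuming the profile's kubeconfig is still active):
	
	  $ kubectl describe node addons-610291
	  # or pull just the condition types via JSONPath
	  $ kubectl get node addons-610291 -o jsonpath='{.status.conditions[*].type}'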
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
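	
	The repeated "martian source" entries record packets claiming a 127.0.0.1 source arriving on eth0; the kernel logs them when log_martians is set, and the rp_filter sysctls decide whether they are also dropped. A sketch for reading the relevant knobs from a node shell (e.g. via minikube ssh):
	
	  $ sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.eth0.rp_filter net.ipv4.conf.all.log_martians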
	
	
	==> etcd [aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce] <==
	{"level":"warn","ts":"2025-10-26T07:47:57.228608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.234507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.241182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.247018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.252954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.259650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.266539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.273116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.279298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.285854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.292421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.311411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.319052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.326022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.379093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:07.582990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:07.594310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.782616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.799216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.811183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.817366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:49:06.143183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.971053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:49:06.143329Z","caller":"traceutil/trace.go:172","msg":"trace[160218583] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1083; }","duration":"115.131212ms","start":"2025-10-26T07:49:06.028180Z","end":"2025-10-26T07:49:06.143312Z","steps":["trace[160218583] 'range keys from in-memory index tree'  (duration: 114.906398ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T07:49:17.198809Z","caller":"traceutil/trace.go:172","msg":"trace[881903939] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"114.711873ms","start":"2025-10-26T07:49:17.084074Z","end":"2025-10-26T07:49:17.198786Z","steps":["trace[881903939] 'process raft request'  (duration: 70.579972ms)","trace[881903939] 'compare'  (duration: 43.982834ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T07:49:32.452853Z","caller":"traceutil/trace.go:172","msg":"trace[2012589528] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"134.42988ms","start":"2025-10-26T07:49:32.318409Z","end":"2025-10-26T07:49:32.452839Z","steps":["trace[2012589528] 'process raft request'  (duration: 134.341572ms)"],"step_count":1}
	
	
	==> gcp-auth [9abeb3423d9e6c097de96ad32dc682ea966fb5047da015c6c3fbfa7e44fd8c46] <==
	2025/10/26 07:49:24 GCP Auth Webhook started!
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	2025/10/26 07:50:17 Ready to marshal response ...
	2025/10/26 07:50:17 Ready to write response ...
	2025/10/26 07:50:18 Ready to marshal response ...
	2025/10/26 07:50:18 Ready to write response ...
	2025/10/26 07:50:19 Ready to marshal response ...
	2025/10/26 07:50:19 Ready to write response ...
	2025/10/26 07:50:19 Ready to marshal response ...
	2025/10/26 07:50:19 Ready to write response ...
	2025/10/26 07:50:21 Ready to marshal response ...
	2025/10/26 07:50:21 Ready to write response ...
	2025/10/26 07:50:28 Ready to marshal response ...
	2025/10/26 07:50:28 Ready to write response ...
	2025/10/26 07:50:49 Ready to marshal response ...
	2025/10/26 07:50:49 Ready to write response ...
	2025/10/26 07:52:44 Ready to marshal response ...
	2025/10/26 07:52:44 Ready to write response ...
	
	
	==> kernel <==
	 07:52:46 up 35 min,  0 user,  load average: 0.32, 0.80, 0.47
	Linux addons-610291 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c] <==
	I1026 07:50:36.750650       1 main.go:301] handling current node
	I1026 07:50:46.750351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:50:46.750381       1 main.go:301] handling current node
	I1026 07:50:56.748938       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:50:56.748966       1 main.go:301] handling current node
	I1026 07:51:06.749971       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:06.750047       1 main.go:301] handling current node
	I1026 07:51:16.749760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:16.749804       1 main.go:301] handling current node
	I1026 07:51:26.754938       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:26.754972       1 main.go:301] handling current node
	I1026 07:51:36.748380       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:36.748412       1 main.go:301] handling current node
	I1026 07:51:46.753652       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:46.753683       1 main.go:301] handling current node
	I1026 07:51:56.748296       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:51:56.748329       1 main.go:301] handling current node
	I1026 07:52:06.748858       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:52:06.748887       1 main.go:301] handling current node
	I1026 07:52:16.749351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:52:16.749405       1 main.go:301] handling current node
	I1026 07:52:26.749358       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:52:26.749413       1 main.go:301] handling current node
	I1026 07:52:36.748990       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:52:36.749046       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a] <==
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 07:48:54.261268       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.267134       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.288414       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.329957       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.410890       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.572190       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.892696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	W1026 07:48:55.263142       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 07:48:55.263194       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 07:48:55.263206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 07:48:55.263147       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 07:48:55.263322       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 07:48:55.264475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 07:48:55.568456       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 07:50:06.806295       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55486: use of closed network connection
	E1026 07:50:06.954201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55512: use of closed network connection
	I1026 07:50:21.352718       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1026 07:50:21.552873       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.78.121"}
	I1026 07:50:28.581373       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1026 07:52:44.571592       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.96.34.94"}
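	
	The burst of v1beta1.metrics.k8s.io errors is the aggregation layer probing the metrics-server Service before its backend answered; the 07:48:55 ResourceManager line marks it coming good. A sketch for checking that state directly (the k8s-app=metrics-server label is the stock manifest's and an assumption here):
	
	  $ kubectl get apiservice v1beta1.metrics.k8s.io
	  $ kubectl -n kube-system get pods -l k8s-app=metrics-server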
	
	
	==> kube-controller-manager [a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83] <==
	I1026 07:48:04.760424       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 07:48:04.760448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 07:48:04.760453       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 07:48:04.760479       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 07:48:04.760541       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 07:48:04.761971       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 07:48:04.762050       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 07:48:04.762731       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 07:48:04.762795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 07:48:04.762848       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 07:48:04.762855       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 07:48:04.762863       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 07:48:04.768901       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-610291" podCIDRs=["10.244.0.0/24"]
	I1026 07:48:04.769983       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:48:04.780540       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 07:48:04.781761       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 07:48:06.492228       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 07:48:34.774498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 07:48:34.774637       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 07:48:34.774672       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 07:48:34.794041       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 07:48:34.798726       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 07:48:34.875021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:48:34.899728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 07:48:49.715463       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5] <==
	I1026 07:48:06.414479       1 server_linux.go:53] "Using iptables proxy"
	I1026 07:48:06.505064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 07:48:06.606278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 07:48:06.613829       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 07:48:06.613924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 07:48:06.642660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 07:48:06.642780       1 server_linux.go:132] "Using iptables Proxier"
	I1026 07:48:06.649323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 07:48:06.654708       1 server.go:527] "Version info" version="v1.34.1"
	I1026 07:48:06.654738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:48:06.656669       1 config.go:309] "Starting node config controller"
	I1026 07:48:06.656688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 07:48:06.656702       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 07:48:06.656764       1 config.go:200] "Starting service config controller"
	I1026 07:48:06.656783       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 07:48:06.656801       1 config.go:106] "Starting endpoint slice config controller"
	I1026 07:48:06.656806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 07:48:06.656820       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 07:48:06.656825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 07:48:06.756884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 07:48:06.756884       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 07:48:06.756999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
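	
	The "configuration may be incomplete or incorrect" line is advisory: with nodePortAddresses unset, NodePorts accept connections on every local IP, and the remedy the log itself names is --nodeport-addresses primary. A sketch for inspecting the live setting (assuming the kubeadm-style kube-proxy ConfigMap minikube generates):
	
	  $ kubectl -n kube-system get configmap kube-proxy -o yaml | grep -A1 nodePortAddresses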
	
	
	==> kube-scheduler [f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3] <==
	E1026 07:47:57.773317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 07:47:57.773415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 07:47:57.774003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 07:47:57.774089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 07:47:57.773992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 07:47:57.774309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:47:57.774315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:47:57.774362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:47:57.774363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:47:57.774377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:47:57.774377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 07:47:57.774512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 07:47:57.774816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:47:57.774838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 07:47:58.631114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:47:58.631117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:47:58.678854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:47:58.717442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:47:58.762486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:47:58.771056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 07:47:58.971665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 07:47:58.975770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:47:58.999838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 07:47:59.008693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1026 07:48:01.469527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
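	
	The scheduler's "forbidden" storm is the usual startup race: its informers list resources before the bootstrap RBAC bindings exist, and the final "Caches are synced" line shows it resolved. The permissions can be verified after the fact with impersonation (a sketch; it needs impersonate rights, which the default minikube context has):
	
	  $ kubectl auth can-i list poddisruptionbudgets --as=system:kube-scheduler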
	
	
	==> kubelet <==
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.453122    1304 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^7cc64c1e-b240-11f0-9922-7af349326fd2" (OuterVolumeSpecName: "task-pv-storage") pod "500eb935-e9da-4c2a-8395-36e7f80e3dc1" (UID: "500eb935-e9da-4c2a-8395-36e7f80e3dc1"). InnerVolumeSpecName "pvc-86e942c1-cd7a-43e5-8a70-a4d33e005a99". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.549921    1304 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-86e942c1-cd7a-43e5-8a70-a4d33e005a99\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7cc64c1e-b240-11f0-9922-7af349326fd2\") on node \"addons-610291\" "
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.549958    1304 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwsx7\" (UniqueName: \"kubernetes.io/projected/500eb935-e9da-4c2a-8395-36e7f80e3dc1-kube-api-access-bwsx7\") on node \"addons-610291\" DevicePath \"\""
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.557110    1304 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-86e942c1-cd7a-43e5-8a70-a4d33e005a99" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^7cc64c1e-b240-11f0-9922-7af349326fd2") on node "addons-610291"
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.650499    1304 reconciler_common.go:299] "Volume detached for volume \"pvc-86e942c1-cd7a-43e5-8a70-a4d33e005a99\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^7cc64c1e-b240-11f0-9922-7af349326fd2\") on node \"addons-610291\" DevicePath \"\""
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.703710    1304 scope.go:117] "RemoveContainer" containerID="3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e"
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.712030    1304 scope.go:117] "RemoveContainer" containerID="3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e"
	Oct 26 07:50:56 addons-610291 kubelet[1304]: E1026 07:50:56.712422    1304 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e\": container with ID starting with 3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e not found: ID does not exist" containerID="3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e"
	Oct 26 07:50:56 addons-610291 kubelet[1304]: I1026 07:50:56.712458    1304 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e"} err="failed to get container status \"3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e\": rpc error: code = NotFound desc = could not find container \"3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e\": container with ID starting with 3d042a7258f25e46f669938017ee39e5a494f60cb533a85d39b0459bce9ece2e not found: ID does not exist"
	Oct 26 07:50:57 addons-610291 kubelet[1304]: I1026 07:50:57.999421    1304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="500eb935-e9da-4c2a-8395-36e7f80e3dc1" path="/var/lib/kubelet/pods/500eb935-e9da-4c2a-8395-36e7f80e3dc1/volumes"
	Oct 26 07:51:00 addons-610291 kubelet[1304]: I1026 07:51:00.018631    1304 scope.go:117] "RemoveContainer" containerID="7a901d1f57060c42035d45da4d119020f56f8f6543a1696e82efc77be816f92a"
	Oct 26 07:51:00 addons-610291 kubelet[1304]: I1026 07:51:00.027635    1304 scope.go:117] "RemoveContainer" containerID="425feccc7653ad0e5259cd2fd07858d89fd013d33fc705e033cfb2ff522a02cb"
	Oct 26 07:51:00 addons-610291 kubelet[1304]: I1026 07:51:00.035423    1304 scope.go:117] "RemoveContainer" containerID="c55be05303d9187f4f533e3d8140b0744a9224afa3ec8b3dc4bfaa4410897f11"
	Oct 26 07:51:00 addons-610291 kubelet[1304]: I1026 07:51:00.044483    1304 scope.go:117] "RemoveContainer" containerID="c6a1f4128526d298dd6aa84f9fbef1bc047595ac7f5ac7592536411dcdb04a3b"
	Oct 26 07:51:03 addons-610291 kubelet[1304]: I1026 07:51:03.746723    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-4mf5m" podStartSLOduration=176.961699917 podStartE2EDuration="2m57.746702125s" podCreationTimestamp="2025-10-26 07:48:06 +0000 UTC" firstStartedPulling="2025-10-26 07:51:02.017936198 +0000 UTC m=+182.102283901" lastFinishedPulling="2025-10-26 07:51:02.802938402 +0000 UTC m=+182.887286109" observedRunningTime="2025-10-26 07:51:03.745167807 +0000 UTC m=+183.829515536" watchObservedRunningTime="2025-10-26 07:51:03.746702125 +0000 UTC m=+183.831049849"
	Oct 26 07:51:22 addons-610291 kubelet[1304]: I1026 07:51:22.996579    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9g5j7" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:51:23 addons-610291 kubelet[1304]: I1026 07:51:23.997105    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-dqbbr" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:51:28 addons-610291 kubelet[1304]: I1026 07:51:28.997435    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-79j4j" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:51:40 addons-610291 kubelet[1304]: I1026 07:51:40.997126    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xgtqv" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:37 addons-610291 kubelet[1304]: I1026 07:52:37.996784    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-9g5j7" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:37 addons-610291 kubelet[1304]: I1026 07:52:37.996992    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/coredns-66bc5c9577-dqbbr" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:37 addons-610291 kubelet[1304]: I1026 07:52:37.997205    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-79j4j" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:42 addons-610291 kubelet[1304]: I1026 07:52:42.996729    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xgtqv" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:52:44 addons-610291 kubelet[1304]: I1026 07:52:44.583579    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/d3c30ce7-8cef-4272-9253-75e2c6c89efb-gcp-creds\") pod \"hello-world-app-5d498dc89-z2x82\" (UID: \"d3c30ce7-8cef-4272-9253-75e2c6c89efb\") " pod="default/hello-world-app-5d498dc89-z2x82"
	Oct 26 07:52:44 addons-610291 kubelet[1304]: I1026 07:52:44.583619    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v889t\" (UniqueName: \"kubernetes.io/projected/d3c30ce7-8cef-4272-9253-75e2c6c89efb-kube-api-access-v889t\") pod \"hello-world-app-5d498dc89-z2x82\" (UID: \"d3c30ce7-8cef-4272-9253-75e2c6c89efb\") " pod="default/hello-world-app-5d498dc89-z2x82"
	
	
	==> storage-provisioner [3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464] <==
	W1026 07:52:20.727623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:22.730513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:22.733960       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:24.736978       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:24.740818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:26.744469       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:26.749317       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:28.752759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:28.756459       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:30.759161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:30.763519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:32.766654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:32.771354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:34.774347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:34.777875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:36.780231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:36.783563       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:38.787063       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:38.790890       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:40.793406       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:40.798910       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:42.801944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:42.807240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:44.810011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:52:44.815640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
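Two warning families dominate the log dump above, and neither is the failure. The kubelet's repeated "Unable to retrieve pull secret ... secret \"gcp-auth\" not found" lines are expected whenever the gcp-auth addon is not enabled, and the storage-provisioner's ~2s stream of "v1 Endpoints is deprecated in v1.33+" warnings indicates it still polls a v1 Endpoints object (most likely its leader-election lock; that is an inference, not something this log states). A quick hedged triage against this profile:

    # Confirm the gcp-auth secret really is absent (expected with the addon off)
    kubectl --context addons-610291 -n kube-system get secret gcp-auth
    # Show the deprecated Endpoints objects the provisioner is polling, and the
    # discovery.k8s.io/v1 EndpointSlices the warning recommends instead
    kubectl --context addons-610291 -n kube-system get endpoints
    kubectl --context addons-610291 -n kube-system get endpointslices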
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-610291 -n addons-610291
helpers_test.go:269: (dbg) Run:  kubectl --context addons-610291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s: exit status 1 (57.043357ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pdppz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d6z8s" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s: exit status 1
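The describe step 404s because ingress-nginx-admission-create and ingress-nginx-admission-patch are pods from the addon's one-shot admission Jobs: they matched the status.phase!=Running field selector (presumably Succeeded), but were cleaned up before the follow-up describe ran. To post-mortem them, query the Jobs rather than the already-deleted pod names (assuming the addon's usual ingress-nginx namespace):

    # The admission pods belong to one-shot Jobs; the Jobs outlive the pods
    kubectl --context addons-610291 -n ingress-nginx get jobs
    kubectl --context addons-610291 -n ingress-nginx get pods -o wide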
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (241.270993ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:52:47.147415   29211 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:52:47.147726   29211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:52:47.147737   29211 out.go:374] Setting ErrFile to fd 2...
	I1026 07:52:47.147742   29211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:52:47.147928   29211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:52:47.148211   29211 mustload.go:65] Loading cluster: addons-610291
	I1026 07:52:47.148571   29211 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:52:47.148591   29211 addons.go:606] checking whether the cluster is paused
	I1026 07:52:47.148695   29211 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:52:47.148717   29211 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:52:47.149083   29211 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:52:47.166719   29211 ssh_runner.go:195] Run: systemctl --version
	I1026 07:52:47.166827   29211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:52:47.184764   29211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:52:47.282731   29211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:52:47.282816   29211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:52:47.311662   29211 cri.go:89] found id: "de5a853aac55444ee9567f100edd5de8a2962e8dfbd61b6fe6d4191bb042ef1d"
	I1026 07:52:47.311686   29211 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:52:47.311690   29211 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:52:47.311693   29211 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:52:47.311696   29211 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:52:47.311699   29211 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:52:47.311701   29211 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:52:47.311714   29211 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:52:47.311719   29211 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:52:47.311725   29211 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:52:47.311734   29211 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:52:47.311739   29211 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:52:47.311746   29211 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:52:47.311751   29211 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:52:47.311758   29211 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:52:47.311764   29211 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:52:47.311772   29211 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:52:47.311777   29211 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:52:47.311780   29211 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:52:47.311784   29211 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:52:47.311786   29211 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:52:47.311789   29211 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:52:47.311792   29211 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:52:47.311794   29211 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:52:47.311801   29211 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:52:47.311805   29211 cri.go:89] found id: ""
	I1026 07:52:47.311850   29211 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:52:47.325775   29211 out.go:203] 
	W1026 07:52:47.326936   29211 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:52:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:52:47.326955   29211 out.go:285] * 
	W1026 07:52:47.330010   29211 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:52:47.331413   29211 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
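Every addons disable call in this report fails the same way: before disabling, minikube checks whether the cluster is paused (addons.go:606) by listing kube-system containers with crictl and then running sudo runc list -f json; on this crio node /run/runc does not exist, so the check itself errors and the command aborts with MK_ADDON_DISABLE_PAUSED even though nothing is paused. A minimal reproduction, assuming the profile from this run (the last command only tests the missing-state-root hypothesis, which the log does not confirm beyond the open error):

    # Step 1 of the check succeeds: CRI-level container listing
    out/minikube-linux-amd64 -p addons-610291 ssh -- sudo crictl ps -a --quiet \
      --label io.kubernetes.pod.namespace=kube-system
    # Step 2 fails exactly as logged: runc's default state root is missing
    out/minikube-linux-amd64 -p addons-610291 ssh -- sudo runc list -f json
    # Hypothesis check: is /run/runc absent because crio keeps state elsewhere?
    out/minikube-linux-amd64 -p addons-610291 ssh -- ls /run/runc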
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable ingress --alsologtostderr -v=1: exit status 11 (241.737347ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:52:47.389891   29272 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:52:47.390186   29272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:52:47.390195   29272 out.go:374] Setting ErrFile to fd 2...
	I1026 07:52:47.390211   29272 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:52:47.390424   29272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:52:47.390670   29272 mustload.go:65] Loading cluster: addons-610291
	I1026 07:52:47.391007   29272 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:52:47.391021   29272 addons.go:606] checking whether the cluster is paused
	I1026 07:52:47.391108   29272 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:52:47.391123   29272 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:52:47.391523   29272 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:52:47.409417   29272 ssh_runner.go:195] Run: systemctl --version
	I1026 07:52:47.409475   29272 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:52:47.426634   29272 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:52:47.525319   29272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:52:47.525443   29272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:52:47.554213   29272 cri.go:89] found id: "de5a853aac55444ee9567f100edd5de8a2962e8dfbd61b6fe6d4191bb042ef1d"
	I1026 07:52:47.554237   29272 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:52:47.554243   29272 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:52:47.554263   29272 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:52:47.554267   29272 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:52:47.554272   29272 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:52:47.554276   29272 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:52:47.554280   29272 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:52:47.554284   29272 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:52:47.554302   29272 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:52:47.554310   29272 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:52:47.554315   29272 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:52:47.554323   29272 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:52:47.554327   29272 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:52:47.554333   29272 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:52:47.554346   29272 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:52:47.554354   29272 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:52:47.554357   29272 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:52:47.554360   29272 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:52:47.554362   29272 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:52:47.554365   29272 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:52:47.554367   29272 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:52:47.554370   29272 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:52:47.554372   29272 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:52:47.554375   29272 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:52:47.554377   29272 cri.go:89] found id: ""
	I1026 07:52:47.554415   29272 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:52:47.568082   29272 out.go:203] 
	W1026 07:52:47.569215   29272 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:52:47Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:52:47.569235   29272 out.go:285] * 
	W1026 07:52:47.572215   29272 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:52:47.573354   29272 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (146.49s)
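When filing the issue the boxed advisory requests, both artifacts it names can be collected in one pass (binary path and log file name copied from the output above):

    # Full cluster log the advisory asks to attach
    out/minikube-linux-amd64 -p addons-610291 logs --file=logs.txt
    # Per-command addon log named inside the box
    cp /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log .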

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (5.31s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-qvptl" [a1cb53b2-fd09-4568-a122-5cf4cd373085] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003133947s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (302.27062ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:50:30.662021   26173 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:30.662304   26173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:30.662317   26173 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:30.662324   26173 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:30.662562   26173 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:30.662868   26173 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:30.663260   26173 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:30.663279   26173 addons.go:606] checking whether the cluster is paused
	I1026 07:50:30.663386   26173 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:30.663407   26173 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:30.663798   26173 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:30.687384   26173 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:30.687454   26173 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:30.717632   26173 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:30.822849   26173 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:30.822978   26173 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:30.856435   26173 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:30.856463   26173 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:30.856470   26173 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:30.856477   26173 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:30.856481   26173 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:30.856486   26173 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:30.856489   26173 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:30.856493   26173 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:30.856497   26173 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:30.856518   26173 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:30.856522   26173 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:30.856526   26173 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:30.856530   26173 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:30.856533   26173 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:30.856537   26173 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:30.856545   26173 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:30.856549   26173 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:30.856555   26173 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:30.856559   26173 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:30.856563   26173 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:30.856566   26173 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:30.856570   26173 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:30.856574   26173 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:30.856578   26173 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:30.856582   26173 cri.go:89] found id: ""
	I1026 07:50:30.856638   26173 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:30.874884   26173 out.go:203] 
	W1026 07:50:30.876173   26173 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:30Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:30.876198   26173 out.go:285] * 
	W1026 07:50:30.880928   26173 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:30.882513   26173 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.31s)
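Note that the gadget DaemonSet itself went healthy in ~5s; only the disable step hit the paused-check failure described under TestAddons/parallel/Ingress. The readiness wait from addons_test.go:823 can be replayed by hand with the same label, namespace, and budget:

    kubectl --context addons-610291 -n gadget wait pod \
      -l k8s-app=gadget --for=condition=Ready --timeout=8m0s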

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (5.33s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.466711ms
I1026 07:50:07.211765   12921 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1026 07:50:07.211789   12921 kapi.go:107] duration metric: took 3.628722ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003939411s
addons_test.go:463: (dbg) Run:  kubectl --context addons-610291 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (252.137403ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:50:12.336728   23825 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:12.337219   23825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:12.337232   23825 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:12.337235   23825 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:12.337783   23825 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:12.338121   23825 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:12.338521   23825 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:12.338538   23825 addons.go:606] checking whether the cluster is paused
	I1026 07:50:12.338622   23825 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:12.338633   23825 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:12.338960   23825 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:12.358217   23825 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:12.358305   23825 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:12.376607   23825 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:12.477870   23825 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:12.477980   23825 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:12.510501   23825 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:12.510522   23825 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:12.510526   23825 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:12.510529   23825 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:12.510533   23825 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:12.510536   23825 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:12.510539   23825 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:12.510541   23825 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:12.510543   23825 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:12.510548   23825 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:12.510550   23825 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:12.510553   23825 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:12.510555   23825 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:12.510557   23825 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:12.510560   23825 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:12.510564   23825 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:12.510566   23825 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:12.510569   23825 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:12.510571   23825 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:12.510574   23825 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:12.510576   23825 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:12.510578   23825 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:12.510581   23825 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:12.510583   23825 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:12.510588   23825 cri.go:89] found id: ""
	I1026 07:50:12.510656   23825 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:12.525082   23825 out.go:203] 
	W1026 07:50:12.526452   23825 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:12.526487   23825 out.go:285] * 
	W1026 07:50:12.529732   23825 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:12.531064   23825 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.33s)
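As with InspektorGadget, the functional half of this test passed: the metrics-server pod went healthy and kubectl top returned data, so the metrics pipeline works; only the disable path failed on the shared paused-check. The passing portion is easy to re-verify by hand (mirrors addons_test.go:463; requires metrics-server to be serving):

    kubectl --context addons-610291 top pods -n kube-system
    kubectl --context addons-610291 top nodes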

                                                
                                    
x
+
TestAddons/parallel/CSI (50.32s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I1026 07:50:07.208164   12921 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.640199ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [90d52b70-b389-429b-8270-c1cdc81dc5b3] Pending
helpers_test.go:352: "task-pv-pod" [90d52b70-b389-429b-8270-c1cdc81dc5b3] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [90d52b70-b389-429b-8270-c1cdc81dc5b3] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.00331439s
addons_test.go:572: (dbg) Run:  kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-610291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-610291 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-610291 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-610291 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [500eb935-e9da-4c2a-8395-36e7f80e3dc1] Pending
helpers_test.go:352: "task-pv-pod-restore" [500eb935-e9da-4c2a-8395-36e7f80e3dc1] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [500eb935-e9da-4c2a-8395-36e7f80e3dc1] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003475129s
addons_test.go:614: (dbg) Run:  kubectl --context addons-610291 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-610291 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-610291 delete volumesnapshot new-snapshot-demo
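The full provision, snapshot, and restore cycle above passed; the gate the test polls is the snapshot's status.readyToUse field. A condensed replay of that verification loop, using the object names from this run (the testdata manifests ship in the minikube repo, so run this from a checkout):

    kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-610291 get pvc hpvc -o 'jsonpath={.status.phase}'
    kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-610291 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-610291 get volumesnapshot new-snapshot-demo \
      -o 'jsonpath={.status.readyToUse}'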
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (243.838188ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:50:57.094847   26934 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:57.095131   26934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:57.095142   26934 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:57.095146   26934 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:57.095368   26934 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:57.095626   26934 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:57.095972   26934 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:57.095986   26934 addons.go:606] checking whether the cluster is paused
	I1026 07:50:57.096067   26934 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:57.096081   26934 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:57.096467   26934 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:57.114275   26934 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:57.114336   26934 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:57.131621   26934 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:57.230868   26934 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:57.230962   26934 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:57.260188   26934 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:57.260207   26934 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:57.260210   26934 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:57.260213   26934 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:57.260216   26934 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:57.260220   26934 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:57.260222   26934 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:57.260224   26934 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:57.260227   26934 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:57.260231   26934 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:57.260234   26934 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:57.260236   26934 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:57.260238   26934 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:57.260240   26934 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:57.260244   26934 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:57.260266   26934 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:57.260271   26934 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:57.260277   26934 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:57.260281   26934 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:57.260285   26934 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:57.260292   26934 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:57.260295   26934 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:57.260297   26934 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:57.260300   26934 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:57.260308   26934 cri.go:89] found id: ""
	I1026 07:50:57.260343   26934 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:57.274844   26934 out.go:203] 
	W1026 07:50:57.276237   26934 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:57.276283   26934 out.go:285] * 
	W1026 07:50:57.279237   26934 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:57.280684   26934 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
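Every addons enable/disable command in this report exits 11 at the same pre-flight step: before changing an addon, minikube checks whether the cluster is paused by listing runc containers (the "sudo runc list -f json" ssh_runner call above), and that check aborts with "open /run/runc: no such file or directory", even though the CRI-level listing ("crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system") succeeds immediately before it. A minimal diagnostic sketch against this profile; the first and third commands are lifted from the log, while the second is an assumption about what to inspect (for example, whether crio is configured with a different OCI runtime such as crun, whose state would live outside /run/runc):

	out/minikube-linux-amd64 -p addons-610291 ssh -- sudo runc list -f json
	out/minikube-linux-amd64 -p addons-610291 ssh -- ls /run
	out/minikube-linux-amd64 -p addons-610291 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system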
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (243.56855ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:50:57.337896   26995 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:57.338138   26995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:57.338148   26995 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:57.338152   26995 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:57.338405   26995 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:57.338637   26995 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:57.338949   26995 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:57.338961   26995 addons.go:606] checking whether the cluster is paused
	I1026 07:50:57.339050   26995 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:57.339061   26995 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:57.339524   26995 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:57.357894   26995 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:57.357951   26995 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:57.375652   26995 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:57.473982   26995 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:57.474051   26995 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:57.502272   26995 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:57.502308   26995 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:57.502314   26995 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:57.502318   26995 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:57.502322   26995 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:57.502327   26995 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:57.502331   26995 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:57.502334   26995 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:57.502338   26995 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:57.502352   26995 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:57.502357   26995 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:57.502361   26995 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:57.502365   26995 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:57.502370   26995 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:57.502375   26995 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:57.502392   26995 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:57.502402   26995 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:57.502408   26995 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:57.502412   26995 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:57.502416   26995 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:57.502420   26995 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:57.502426   26995 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:57.502431   26995 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:57.502438   26995 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:57.502443   26995 cri.go:89] found id: ""
	I1026 07:50:57.502507   26995 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:57.519065   26995 out.go:203] 
	W1026 07:50:57.520539   26995 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:57.520571   26995 out.go:285] * 
	W1026 07:50:57.523933   26995 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:57.525416   26995 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (50.32s)

                                                
                                    
x
+
TestAddons/parallel/Headlamp (2.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-610291 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-610291 --alsologtostderr -v=1: exit status 11 (257.019044ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 07:50:07.263976   22953 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:07.264143   22953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:07.264153   22953 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:07.264157   22953 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:07.264349   22953 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:07.264619   22953 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:07.265036   22953 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:07.265054   22953 addons.go:606] checking whether the cluster is paused
	I1026 07:50:07.265178   22953 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:07.265202   22953 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:07.265720   22953 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:07.286596   22953 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:07.286657   22953 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:07.305516   22953 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:07.409449   22953 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:07.409518   22953 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:07.439622   22953 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:07.439643   22953 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:07.439649   22953 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:07.439656   22953 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:07.439670   22953 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:07.439674   22953 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:07.439678   22953 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:07.439681   22953 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:07.439685   22953 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:07.439692   22953 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:07.439696   22953 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:07.439700   22953 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:07.439704   22953 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:07.439709   22953 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:07.439713   22953 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:07.439739   22953 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:07.439748   22953 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:07.439753   22953 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:07.439757   22953 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:07.439761   22953 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:07.439769   22953 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:07.439772   22953 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:07.439776   22953 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:07.439786   22953 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:07.439790   22953 cri.go:89] found id: ""
	I1026 07:50:07.439832   22953 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:07.453718   22953 out.go:203] 
	W1026 07:50:07.454906   22953 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:07Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:07.454942   22953 out.go:285] * 
	W1026 07:50:07.457969   22953 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:07.459270   22953 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-610291 --alsologtostderr -v=1": exit status 11
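This enable call trips the same paused pre-flight as the disable calls above, just with MK_ADDON_ENABLE_PAUSED in place of MK_ADDON_DISABLE_PAUSED. To confirm the host itself is Running rather than Paused, the same status probe the post-mortem below runs can be replayed directly:

	out/minikube-linux-amd64 status --format={{.Host}} -p addons-610291 -n addons-610291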
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-610291
helpers_test.go:243: (dbg) docker inspect addons-610291:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111",
	        "Created": "2025-10-26T07:47:45.843572466Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 14921,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T07:47:45.87708592Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/hostname",
	        "HostsPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/hosts",
	        "LogPath": "/var/lib/docker/containers/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111/709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111-json.log",
	        "Name": "/addons-610291",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-610291:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-610291",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "709e79e538aa8c03bc05507d147fa486e1e6f491707fc965e67ba1496d72f111",
	                "LowerDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0ccd18ff4f865e14ae158aa6fe24098029a52bf722dfd5dad0e63afaa339bba4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-610291",
	                "Source": "/var/lib/docker/volumes/addons-610291/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-610291",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-610291",
	                "name.minikube.sigs.k8s.io": "addons-610291",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5a7c7949bbcf1f10ee54d165f96e838f5624bf03cdc69b2f5246e545b1740dc8",
	            "SandboxKey": "/var/run/docker/netns/5a7c7949bbcf",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-610291": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:30:8d:e9:f2:22",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7b22a427e8709e846cfa922d1f5e5433a05ebedc13e9c92c84d3699672c9349c",
	                    "EndpointID": "bf64b31b3bc6b0103283a9ae71065d9e07ab27c3ea6e5c4119e195f6aafed183",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-610291",
	                        "709e79e538aa"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
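The SSH endpoint in the sshutil lines above (127.0.0.1:32768) is read straight out of this inspect output, where NetworkSettings.Ports maps "22/tcp" to HostPort 32768. A shell-safe sketch of the same template query the cli_runner lines show (the quoting is adjusted for an interactive shell):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-610291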
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-610291 -n addons-610291
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-610291 logs -n 25: (1.129366747s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-095815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-095815   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-095815                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-095815   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ -o=json --download-only -p download-only-460564 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-460564   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-460564                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-460564   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-095815                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-095815   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-460564                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-460564   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ --download-only -p download-docker-893358 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-893358 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ -p download-docker-893358                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-893358 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ --download-only -p binary-mirror-916619 --alsologtostderr --binary-mirror http://127.0.0.1:36125 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-916619   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ -p binary-mirror-916619                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-916619   │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ addons  │ enable dashboard -p addons-610291                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ addons  │ disable dashboard -p addons-610291                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ start   │ -p addons-610291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:49 UTC │
	│ addons  │ addons-610291 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:49 UTC │                     │
	│ addons  │ addons-610291 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	│ addons  │ enable headlamp -p addons-610291 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-610291          │ jenkins │ v1.37.0 │ 26 Oct 25 07:50 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:21
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:21.279595   14247 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:21.279703   14247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:21.279712   14247 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:21.279715   14247 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:21.279905   14247 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:47:21.280428   14247 out.go:368] Setting JSON to false
	I1026 07:47:21.281204   14247 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1792,"bootTime":1761463049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:21.281306   14247 start.go:141] virtualization: kvm guest
	I1026 07:47:21.283307   14247 out.go:179] * [addons-610291] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:21.284958   14247 notify.go:220] Checking for updates...
	I1026 07:47:21.284979   14247 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:47:21.286474   14247 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:21.287828   14247 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:47:21.289486   14247 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:47:21.290677   14247 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:47:21.291791   14247 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:47:21.293219   14247 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:21.316288   14247 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:47:21.316387   14247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:21.374931   14247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 07:47:21.365489879 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:21.375037   14247 docker.go:318] overlay module found
	I1026 07:47:21.376723   14247 out.go:179] * Using the docker driver based on user configuration
	I1026 07:47:21.377857   14247 start.go:305] selected driver: docker
	I1026 07:47:21.377873   14247 start.go:925] validating driver "docker" against <nil>
	I1026 07:47:21.377882   14247 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:47:21.378451   14247 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:21.429528   14247 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-10-26 07:47:21.420550362 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:21.429672   14247 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:21.429859   14247 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:47:21.431628   14247 out.go:179] * Using Docker driver with root privileges
	I1026 07:47:21.432809   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:47:21.432879   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:47:21.432893   14247 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 07:47:21.432957   14247 start.go:349] cluster config:
	{Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:21.434263   14247 out.go:179] * Starting "addons-610291" primary control-plane node in "addons-610291" cluster
	I1026 07:47:21.435379   14247 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 07:47:21.436511   14247 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 07:47:21.437649   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:21.437691   14247 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 07:47:21.437717   14247 cache.go:58] Caching tarball of preloaded images
	I1026 07:47:21.437771   14247 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 07:47:21.437791   14247 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 07:47:21.437802   14247 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 07:47:21.438151   14247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json ...
	I1026 07:47:21.438175   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json: {Name:mkcca355575390147054e49c3b0ee0e3923d5755 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:21.453391   14247 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 to local cache
	I1026 07:47:21.453510   14247 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory
	I1026 07:47:21.453530   14247 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local cache directory, skipping pull
	I1026 07:47:21.453538   14247 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in cache, skipping pull
	I1026 07:47:21.453548   14247 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 as a tarball
	I1026 07:47:21.453558   14247 cache.go:165] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from local cache
	I1026 07:47:34.093122   14247 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 from cached tarball
	I1026 07:47:34.093150   14247 cache.go:232] Successfully downloaded all kic artifacts
	I1026 07:47:34.093192   14247 start.go:360] acquireMachinesLock for addons-610291: {Name:mk5ae23e2a114127e4eb4fc97f79aafc5ce2edba Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 07:47:34.093321   14247 start.go:364] duration metric: took 108.763µs to acquireMachinesLock for "addons-610291"
	I1026 07:47:34.093353   14247 start.go:93] Provisioning new machine with config: &{Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:47:34.093418   14247 start.go:125] createHost starting for "" (driver="docker")
	I1026 07:47:34.095319   14247 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1026 07:47:34.095615   14247 start.go:159] libmachine.API.Create for "addons-610291" (driver="docker")
	I1026 07:47:34.095654   14247 client.go:168] LocalClient.Create starting
	I1026 07:47:34.095777   14247 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 07:47:34.237140   14247 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 07:47:34.558729   14247 cli_runner.go:164] Run: docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 07:47:34.575242   14247 cli_runner.go:211] docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 07:47:34.575326   14247 network_create.go:284] running [docker network inspect addons-610291] to gather additional debugging logs...
	I1026 07:47:34.575350   14247 cli_runner.go:164] Run: docker network inspect addons-610291
	W1026 07:47:34.590663   14247 cli_runner.go:211] docker network inspect addons-610291 returned with exit code 1
	I1026 07:47:34.590692   14247 network_create.go:287] error running [docker network inspect addons-610291]: docker network inspect addons-610291: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-610291 not found
	I1026 07:47:34.590705   14247 network_create.go:289] output of [docker network inspect addons-610291]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-610291 not found
	
	** /stderr **
	I1026 07:47:34.590824   14247 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 07:47:34.606988   14247 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017792a0}
	I1026 07:47:34.607059   14247 network_create.go:124] attempt to create docker network addons-610291 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1026 07:47:34.607102   14247 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-610291 addons-610291
	I1026 07:47:34.660608   14247 network_create.go:108] docker network addons-610291 192.168.49.0/24 created
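	The network-create step above can be reproduced programmatically, mirroring how minikube's cli_runner shells out to the docker CLI. A sketch using os/exec, with the flags copied from the logged command; it assumes the docker CLI is on PATH.

package main

// Sketch: reproduce the logged `docker network create` invocation via
// os/exec. Flags are copied from the cli_runner line above; assumes
// the docker CLI is on PATH.

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=addons-610291",
		"addons-610291",
	).CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("network create failed:", err)
	}
}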
	I1026 07:47:34.660638   14247 kic.go:121] calculated static IP "192.168.49.2" for the "addons-610291" container
	I1026 07:47:34.660719   14247 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 07:47:34.677195   14247 cli_runner.go:164] Run: docker volume create addons-610291 --label name.minikube.sigs.k8s.io=addons-610291 --label created_by.minikube.sigs.k8s.io=true
	I1026 07:47:34.694118   14247 oci.go:103] Successfully created a docker volume addons-610291
	I1026 07:47:34.694185   14247 cli_runner.go:164] Run: docker run --rm --name addons-610291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --entrypoint /usr/bin/test -v addons-610291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 07:47:41.497732   14247 cli_runner.go:217] Completed: docker run --rm --name addons-610291-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --entrypoint /usr/bin/test -v addons-610291:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (6.803503345s)
	I1026 07:47:41.497766   14247 oci.go:107] Successfully prepared a docker volume addons-610291
	I1026 07:47:41.497794   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:41.497811   14247 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 07:47:41.497891   14247 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-610291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 07:47:45.772375   14247 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-610291:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.274420193s)
	I1026 07:47:45.772403   14247 kic.go:203] duration metric: took 4.274587495s to extract preloaded images to volume ...
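	The preload is an lz4-compressed tarball; the log extracts it inside a helper container, but the same archive can be inspected directly. A sketch assuming the third-party github.com/pierrec/lz4/v4 package, with the filename taken from the path above:

package main

// Sketch: list the contents of minikube's preloaded-images tarball.
// Assumes the third-party package github.com/pierrec/lz4/v4.

import (
	"archive/tar"
	"fmt"
	"io"
	"os"

	lz4 "github.com/pierrec/lz4/v4"
)

func main() {
	f, err := os.Open("preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Decompress the lz4 stream, then walk the tar entries inside it.
	tr := tar.NewReader(lz4.NewReader(f))
	for {
		hdr, err := tr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Println(hdr.Name)
	}
}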
	W1026 07:47:45.772499   14247 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 07:47:45.772539   14247 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 07:47:45.772593   14247 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 07:47:45.828381   14247 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-610291 --name addons-610291 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-610291 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-610291 --network addons-610291 --ip 192.168.49.2 --volume addons-610291:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 07:47:46.123437   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Running}}
	I1026 07:47:46.141900   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.160299   14247 cli_runner.go:164] Run: docker exec addons-610291 stat /var/lib/dpkg/alternatives/iptables
	I1026 07:47:46.205212   14247 oci.go:144] the created container "addons-610291" has a running status.
	I1026 07:47:46.205238   14247 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa...
	I1026 07:47:46.616196   14247 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 07:47:46.642748   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.661100   14247 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 07:47:46.661123   14247 kic_runner.go:114] Args: [docker exec --privileged addons-610291 chown docker:docker /home/docker/.ssh/authorized_keys]
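	"Creating ssh key for kic" plus the 381-byte authorized_keys push above amounts to generating an RSA key pair and serializing the public half in authorized_keys format. A minimal sketch, assuming golang.org/x/crypto/ssh is available:

package main

// Sketch: generate an id_rsa/id_rsa.pub pair like the kic provisioner.
// Assumes golang.org/x/crypto/ssh.

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"fmt"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// Private key, PEM-encoded (what id_rsa would contain).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(key),
	})
	// Public key in authorized_keys format (what id_rsa.pub would contain).
	pub, err := ssh.NewPublicKey(&key.PublicKey)
	if err != nil {
		panic(err)
	}
	fmt.Printf("private key: %d bytes\n", len(privPEM))
	fmt.Printf("%s", ssh.MarshalAuthorizedKey(pub))
}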
	I1026 07:47:46.706187   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:47:46.723342   14247 machine.go:93] provisionDockerMachine start ...
	I1026 07:47:46.723434   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:46.741573   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:46.741823   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:46.741839   14247 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 07:47:46.881944   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610291
	
	I1026 07:47:46.881978   14247 ubuntu.go:182] provisioning hostname "addons-610291"
	I1026 07:47:46.882052   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:46.899186   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:46.899425   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:46.899442   14247 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-610291 && echo "addons-610291" | sudo tee /etc/hostname
	I1026 07:47:47.046199   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-610291
	
	I1026 07:47:47.046289   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.064409   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:47.064668   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:47.064693   14247 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-610291' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-610291/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-610291' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 07:47:47.202657   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
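	The "Using SSH client type: native" lines above dial the container's forwarded SSH port (127.0.0.1:32768) and run each provisioning command over a session. A sketch of that flow, assuming golang.org/x/crypto/ssh; host-key checking is disabled here purely for brevity, and the key path is abbreviated.

package main

// Sketch: run a command over SSH the way the native client lines above
// do. Assumes golang.org/x/crypto/ssh; key path abbreviated.

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("machines/addons-610291/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // expect "addons-610291"
}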
	I1026 07:47:47.202689   14247 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 07:47:47.202734   14247 ubuntu.go:190] setting up certificates
	I1026 07:47:47.202749   14247 provision.go:84] configureAuth start
	I1026 07:47:47.202807   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:47.220449   14247 provision.go:143] copyHostCerts
	I1026 07:47:47.220511   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 07:47:47.220619   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 07:47:47.220678   14247 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 07:47:47.220728   14247 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.addons-610291 san=[127.0.0.1 192.168.49.2 addons-610291 localhost minikube]
	I1026 07:47:47.401519   14247 provision.go:177] copyRemoteCerts
	I1026 07:47:47.401570   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 07:47:47.401600   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.418631   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:47.517204   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 07:47:47.534807   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 07:47:47.550881   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 07:47:47.567695   14247 provision.go:87] duration metric: took 364.932184ms to configureAuth
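	The server-cert step above issues a certificate signed by the minikube CA with the SANs listed in the log (127.0.0.1, 192.168.49.2, addons-610291, localhost, minikube). A standard-library sketch; the CA is self-signed on the spot here as a stand-in for the existing ca.pem/ca-key.pem.

package main

// Sketch: issue a node server certificate with the SANs from the
// provision log. The CA below is a stand-in; minikube loads its CA
// from disk instead. Standard library only.

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

func main() {
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, err := x509.ParseCertificate(caDER)
	if err != nil {
		panic(err)
	}

	// Server certificate with the logged SANs, signed by the CA.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.addons-610291"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		DNSNames:     []string{"addons-610291", "localhost", "minikube"},
	}
	if _, err := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey); err != nil {
		panic(err)
	}
}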
	I1026 07:47:47.567718   14247 ubuntu.go:206] setting minikube options for container-runtime
	I1026 07:47:47.567852   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:47:47.567936   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.585451   14247 main.go:141] libmachine: Using SSH client type: native
	I1026 07:47:47.585688   14247 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1026 07:47:47.585714   14247 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 07:47:47.833685   14247 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 07:47:47.833706   14247 machine.go:96] duration metric: took 1.110341315s to provisionDockerMachine
	I1026 07:47:47.833716   14247 client.go:171] duration metric: took 13.738051438s to LocalClient.Create
	I1026 07:47:47.833735   14247 start.go:167] duration metric: took 13.738119331s to libmachine.API.Create "addons-610291"
	I1026 07:47:47.833744   14247 start.go:293] postStartSetup for "addons-610291" (driver="docker")
	I1026 07:47:47.833756   14247 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 07:47:47.833810   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 07:47:47.833858   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.851692   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:47.952937   14247 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 07:47:47.956352   14247 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 07:47:47.956376   14247 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 07:47:47.956386   14247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 07:47:47.956444   14247 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 07:47:47.956471   14247 start.go:296] duration metric: took 122.720964ms for postStartSetup
	I1026 07:47:47.956761   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:47.973604   14247 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/config.json ...
	I1026 07:47:47.973852   14247 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 07:47:47.973892   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:47.990955   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.086398   14247 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 07:47:48.090727   14247 start.go:128] duration metric: took 13.997297631s to createHost
	I1026 07:47:48.090748   14247 start.go:83] releasing machines lock for "addons-610291", held for 13.997410793s
	I1026 07:47:48.090801   14247 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-610291
	I1026 07:47:48.109673   14247 ssh_runner.go:195] Run: cat /version.json
	I1026 07:47:48.109731   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:48.109760   14247 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 07:47:48.109826   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:47:48.127933   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.128615   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:47:48.279079   14247 ssh_runner.go:195] Run: systemctl --version
	I1026 07:47:48.285415   14247 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 07:47:48.319345   14247 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 07:47:48.323943   14247 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 07:47:48.324002   14247 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 07:47:48.349760   14247 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 07:47:48.349791   14247 start.go:495] detecting cgroup driver to use...
	I1026 07:47:48.349818   14247 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 07:47:48.349864   14247 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 07:47:48.365050   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 07:47:48.377154   14247 docker.go:218] disabling cri-docker service (if available) ...
	I1026 07:47:48.377209   14247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 07:47:48.392886   14247 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 07:47:48.409920   14247 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 07:47:48.488743   14247 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 07:47:48.575380   14247 docker.go:234] disabling docker service ...
	I1026 07:47:48.575454   14247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 07:47:48.593462   14247 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 07:47:48.606110   14247 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 07:47:48.687992   14247 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 07:47:48.770818   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 07:47:48.783098   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 07:47:48.797105   14247 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 07:47:48.797154   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.806959   14247 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 07:47:48.807013   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.815303   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.823310   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.831403   14247 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 07:47:48.839256   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.847534   14247 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 07:47:48.860404   14247 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
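	The net effect of the sed series above on the CRI-O drop-in, as a reconstruction from the logged commands (the section headers are assumed from CRI-O's stock config layout, not captured from the node):

/etc/crio/crio.conf.d/02-crio.conf (relevant keys):

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "pod"
default_sysctls = [
  "net.ipv4.ip_unprivileged_port_start=0",
]

[crio.image]
pause_image = "registry.k8s.io/pause:3.10.1"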
	I1026 07:47:48.868925   14247 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 07:47:48.875694   14247 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1026 07:47:48.875737   14247 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1026 07:47:48.887148   14247 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 07:47:48.894200   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:47:48.971031   14247 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 07:47:49.074879   14247 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 07:47:49.074947   14247 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 07:47:49.078665   14247 start.go:563] Will wait 60s for crictl version
	I1026 07:47:49.078732   14247 ssh_runner.go:195] Run: which crictl
	I1026 07:47:49.082169   14247 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 07:47:49.106368   14247 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 07:47:49.106502   14247 ssh_runner.go:195] Run: crio --version
	I1026 07:47:49.133506   14247 ssh_runner.go:195] Run: crio --version
	I1026 07:47:49.163483   14247 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 07:47:49.164522   14247 cli_runner.go:164] Run: docker network inspect addons-610291 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 07:47:49.181305   14247 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1026 07:47:49.185230   14247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:47:49.195135   14247 kubeadm.go:883] updating cluster {Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 07:47:49.195225   14247 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 07:47:49.195284   14247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:47:49.223719   14247 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 07:47:49.223738   14247 crio.go:433] Images already preloaded, skipping extraction
	I1026 07:47:49.223781   14247 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 07:47:49.246859   14247 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 07:47:49.246879   14247 cache_images.go:85] Images are preloaded, skipping loading
	I1026 07:47:49.246885   14247 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1026 07:47:49.246960   14247 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-610291 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 07:47:49.247017   14247 ssh_runner.go:195] Run: crio config
	I1026 07:47:49.291433   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:47:49.291455   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:47:49.291479   14247 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 07:47:49.291507   14247 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-610291 NodeName:addons-610291 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 07:47:49.291653   14247 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-610291"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 07:47:49.291728   14247 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 07:47:49.299600   14247 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 07:47:49.299662   14247 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 07:47:49.306882   14247 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 07:47:49.318904   14247 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 07:47:49.334378   14247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1026 07:47:49.347269   14247 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1026 07:47:49.351037   14247 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 07:47:49.360898   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:47:49.440802   14247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:47:49.465707   14247 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291 for IP: 192.168.49.2
	I1026 07:47:49.465725   14247 certs.go:195] generating shared ca certs ...
	I1026 07:47:49.465739   14247 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.465844   14247 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 07:47:49.751724   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt ...
	I1026 07:47:49.751756   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt: {Name:mk22a5729f47ea6d5d732bc99ea3bee5794d62ce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.751925   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key ...
	I1026 07:47:49.751936   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key: {Name:mkee1e95054c760f9f30ea61b9e625b3b8c7e485 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:49.752025   14247 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 07:47:50.151821   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt ...
	I1026 07:47:50.151849   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt: {Name:mkf2594a4b511b04a346ce370fe4d575bea18e03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.152020   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key ...
	I1026 07:47:50.152032   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key: {Name:mk609c4d5e45bb36cc12f3827342395af5d820f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.152103   14247 certs.go:257] generating profile certs ...
	I1026 07:47:50.152158   14247 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key
	I1026 07:47:50.152173   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt with IP's: []
	I1026 07:47:50.215686   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt ...
	I1026 07:47:50.215714   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: {Name:mkecc3d3e94268147dd2d8cdbd70e447ff58bc5a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.215866   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key ...
	I1026 07:47:50.215880   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.key: {Name:mkd578ae209befaa9b0d8558f5ed038dd7e81266 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.215952   14247 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5
	I1026 07:47:50.215972   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1026 07:47:50.432125   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 ...
	I1026 07:47:50.432153   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5: {Name:mk4d9a750d8ada4e8e008c2c1ddad70a2f3e0625 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.432318   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5 ...
	I1026 07:47:50.432333   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5: {Name:mkf99c16a95038b7b0dfaebb9b18bcf2232ea333 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.432405   14247 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt.546045b5 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt
	I1026 07:47:50.432475   14247 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key.546045b5 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key
	I1026 07:47:50.432524   14247 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key
	I1026 07:47:50.432541   14247 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt with IP's: []
	I1026 07:47:50.746921   14247 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt ...
	I1026 07:47:50.746951   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt: {Name:mkbbdf7bed8f765e54fad832da39c8a295138c7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.747111   14247 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key ...
	I1026 07:47:50.747122   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key: {Name:mk5b33d8803f9c5929454310a9ea4a5e1c8050aa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:47:50.747297   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 07:47:50.747332   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 07:47:50.747361   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 07:47:50.747383   14247 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 07:47:50.747953   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 07:47:50.765823   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 07:47:50.782551   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 07:47:50.799363   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 07:47:50.815695   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 07:47:50.831965   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 07:47:50.848439   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 07:47:50.864675   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 07:47:50.880826   14247 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 07:47:50.899167   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 07:47:50.911131   14247 ssh_runner.go:195] Run: openssl version
	I1026 07:47:50.916961   14247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 07:47:50.928040   14247 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.932127   14247 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.932180   14247 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 07:47:50.970989   14247 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 07:47:50.979521   14247 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 07:47:50.982986   14247 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 07:47:50.983037   14247 kubeadm.go:400] StartCluster: {Name:addons-610291 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-610291 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:47:50.983120   14247 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:47:50.983176   14247 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:47:51.008856   14247 cri.go:89] found id: ""
	I1026 07:47:51.008913   14247 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 07:47:51.017006   14247 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 07:47:51.024683   14247 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 07:47:51.024740   14247 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 07:47:51.032392   14247 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 07:47:51.032419   14247 kubeadm.go:157] found existing configuration files:
	
	I1026 07:47:51.032461   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 07:47:51.040194   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 07:47:51.040236   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 07:47:51.047357   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 07:47:51.054704   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 07:47:51.054747   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 07:47:51.061735   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 07:47:51.068928   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 07:47:51.068991   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 07:47:51.075673   14247 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 07:47:51.082719   14247 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 07:47:51.082776   14247 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 07:47:51.089481   14247 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 07:47:51.144127   14247 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 07:47:51.198085   14247 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 07:48:00.765490   14247 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 07:48:00.765570   14247 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 07:48:00.765694   14247 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 07:48:00.765767   14247 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 07:48:00.765811   14247 kubeadm.go:318] OS: Linux
	I1026 07:48:00.765850   14247 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 07:48:00.765889   14247 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 07:48:00.765929   14247 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 07:48:00.765968   14247 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 07:48:00.766021   14247 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 07:48:00.766103   14247 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 07:48:00.766186   14247 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 07:48:00.766283   14247 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 07:48:00.766401   14247 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 07:48:00.766534   14247 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 07:48:00.766673   14247 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 07:48:00.766761   14247 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 07:48:00.768613   14247 out.go:252]   - Generating certificates and keys ...
	I1026 07:48:00.768688   14247 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 07:48:00.768779   14247 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 07:48:00.768861   14247 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 07:48:00.768932   14247 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 07:48:00.768994   14247 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 07:48:00.769055   14247 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 07:48:00.769120   14247 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 07:48:00.769266   14247 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [addons-610291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 07:48:00.769342   14247 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 07:48:00.769470   14247 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [addons-610291 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1026 07:48:00.769563   14247 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 07:48:00.769649   14247 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 07:48:00.769713   14247 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 07:48:00.769798   14247 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 07:48:00.769850   14247 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 07:48:00.769912   14247 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 07:48:00.769958   14247 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 07:48:00.770022   14247 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 07:48:00.770101   14247 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 07:48:00.770212   14247 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 07:48:00.770317   14247 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 07:48:00.771814   14247 out.go:252]   - Booting up control plane ...
	I1026 07:48:00.771890   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 07:48:00.771985   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 07:48:00.772079   14247 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 07:48:00.772199   14247 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 07:48:00.772335   14247 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 07:48:00.772446   14247 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 07:48:00.772521   14247 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 07:48:00.772555   14247 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 07:48:00.772685   14247 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 07:48:00.772783   14247 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 07:48:00.772846   14247 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001888489s
	I1026 07:48:00.772933   14247 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 07:48:00.773001   14247 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1026 07:48:00.773076   14247 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 07:48:00.773151   14247 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 07:48:00.773216   14247 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.533951099s
	I1026 07:48:00.773310   14247 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.92003129s
	I1026 07:48:00.773411   14247 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 3.501436204s
	I1026 07:48:00.773524   14247 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 07:48:00.773648   14247 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 07:48:00.773730   14247 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 07:48:00.773921   14247 kubeadm.go:318] [mark-control-plane] Marking the node addons-610291 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 07:48:00.773987   14247 kubeadm.go:318] [bootstrap-token] Using token: aa1fmf.q9mlltjnhg1c496f
	I1026 07:48:00.775507   14247 out.go:252]   - Configuring RBAC rules ...
	I1026 07:48:00.775605   14247 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 07:48:00.775699   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 07:48:00.775839   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 07:48:00.775955   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 07:48:00.776088   14247 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 07:48:00.776176   14247 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 07:48:00.776307   14247 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 07:48:00.776347   14247 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 07:48:00.776387   14247 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 07:48:00.776393   14247 kubeadm.go:318] 
	I1026 07:48:00.776457   14247 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 07:48:00.776463   14247 kubeadm.go:318] 
	I1026 07:48:00.776550   14247 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 07:48:00.776562   14247 kubeadm.go:318] 
	I1026 07:48:00.776596   14247 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 07:48:00.776677   14247 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 07:48:00.776741   14247 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 07:48:00.776749   14247 kubeadm.go:318] 
	I1026 07:48:00.776793   14247 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 07:48:00.776801   14247 kubeadm.go:318] 
	I1026 07:48:00.776840   14247 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 07:48:00.776846   14247 kubeadm.go:318] 
	I1026 07:48:00.776889   14247 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 07:48:00.776954   14247 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 07:48:00.777012   14247 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 07:48:00.777027   14247 kubeadm.go:318] 
	I1026 07:48:00.777123   14247 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 07:48:00.777193   14247 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 07:48:00.777199   14247 kubeadm.go:318] 
	I1026 07:48:00.777304   14247 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token aa1fmf.q9mlltjnhg1c496f \
	I1026 07:48:00.777419   14247 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 07:48:00.777441   14247 kubeadm.go:318] 	--control-plane 
	I1026 07:48:00.777445   14247 kubeadm.go:318] 
	I1026 07:48:00.777537   14247 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 07:48:00.777548   14247 kubeadm.go:318] 
	I1026 07:48:00.777677   14247 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token aa1fmf.q9mlltjnhg1c496f \
	I1026 07:48:00.777909   14247 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
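	(note: the --discovery-token-ca-cert-hash above is the SHA-256 of the cluster CA's public key; assuming shell access to the control-plane node, it can be recomputed with the standard openssl pipeline from the kubeadm docs to confirm a join command targets this cluster:

	    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'
	)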
	I1026 07:48:00.777926   14247 cni.go:84] Creating CNI manager for ""
	I1026 07:48:00.777932   14247 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 07:48:00.779366   14247 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 07:48:00.780670   14247 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 07:48:00.784746   14247 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 07:48:00.784765   14247 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 07:48:00.797362   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 07:48:00.992886   14247 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 07:48:00.992957   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:00.993025   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-610291 minikube.k8s.io/updated_at=2025_10_26T07_48_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=addons-610291 minikube.k8s.io/primary=true
	I1026 07:48:01.069083   14247 ops.go:34] apiserver oom_adj: -16
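	(note: the `cat /proc/$(pgrep kube-apiserver)/oom_adj` run at 07:48:00.992886 above reads the kernel OOM-killer adjustment for the apiserver process; -16 deprioritizes it for OOM kills. A minimal manual check on the node, assuming a single kube-apiserver process:

	    cat /proc/$(pgrep kube-apiserver)/oom_adj        # legacy interface, the one logged here
	    cat /proc/$(pgrep kube-apiserver)/oom_score_adj  # modern equivalent
	)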
	I1026 07:48:01.069221   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:01.570185   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:02.069962   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:02.569328   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:03.069477   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:03.569520   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:04.069713   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:04.570045   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:05.070328   14247 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 07:48:05.132624   14247 kubeadm.go:1113] duration metric: took 4.139722629s to wait for elevateKubeSystemPrivileges
	I1026 07:48:05.132657   14247 kubeadm.go:402] duration metric: took 14.149622166s to StartCluster
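	(note: the repeated `kubectl get sa default` runs above are minikube polling, at roughly 500ms intervals, for the default ServiceAccount to exist before creating the minikube-rbac cluster-admin binding. A minimal bash sketch of the same wait, assuming the kubeconfig path from the log:

	    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
	        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	      sleep 0.5
	    done
	)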
	I1026 07:48:05.132672   14247 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:05.132801   14247 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:48:05.133393   14247 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 07:48:05.133646   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 07:48:05.133657   14247 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 07:48:05.133676   14247 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1026 07:48:05.133816   14247 addons.go:69] Setting default-storageclass=true in profile "addons-610291"
	I1026 07:48:05.133820   14247 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-610291"
	I1026 07:48:05.133833   14247 addons.go:69] Setting volcano=true in profile "addons-610291"
	I1026 07:48:05.133836   14247 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-610291"
	I1026 07:48:05.133844   14247 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-610291"
	I1026 07:48:05.133844   14247 addons.go:69] Setting registry-creds=true in profile "addons-610291"
	I1026 07:48:05.133866   14247 addons.go:69] Setting storage-provisioner=true in profile "addons-610291"
	I1026 07:48:05.133871   14247 addons.go:238] Setting addon registry-creds=true in "addons-610291"
	I1026 07:48:05.133873   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133891   14247 addons.go:238] Setting addon storage-provisioner=true in "addons-610291"
	I1026 07:48:05.133898   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133910   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:05.133912   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133792   14247 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-610291"
	I1026 07:48:05.134130   14247 addons.go:69] Setting gcp-auth=true in profile "addons-610291"
	I1026 07:48:05.134161   14247 mustload.go:65] Loading cluster: addons-610291
	I1026 07:48:05.134175   14247 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-610291"
	I1026 07:48:05.134207   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134221   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134363   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134386   14247 addons.go:69] Setting inspektor-gadget=true in profile "addons-610291"
	I1026 07:48:05.134419   14247 addons.go:69] Setting ingress-dns=true in profile "addons-610291"
	I1026 07:48:05.134433   14247 addons.go:238] Setting addon ingress-dns=true in "addons-610291"
	I1026 07:48:05.134438   14247 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:48:05.134448   14247 addons.go:69] Setting metrics-server=true in profile "addons-610291"
	I1026 07:48:05.134450   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134460   14247 addons.go:238] Setting addon metrics-server=true in "addons-610291"
	I1026 07:48:05.134483   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134498   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134718   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133800   14247 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-610291"
	I1026 07:48:05.134742   14247 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-610291"
	I1026 07:48:05.134769   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134957   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134993   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.134720   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133848   14247 addons.go:238] Setting addon volcano=true in "addons-610291"
	I1026 07:48:05.136016   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134441   14247 addons.go:238] Setting addon inspektor-gadget=true in "addons-610291"
	I1026 07:48:05.136209   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.134411   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.136782   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.136786   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133831   14247 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-610291"
	I1026 07:48:05.139429   14247 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-610291"
	I1026 07:48:05.139867   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133804   14247 addons.go:69] Setting cloud-spanner=true in profile "addons-610291"
	I1026 07:48:05.140407   14247 addons.go:238] Setting addon cloud-spanner=true in "addons-610291"
	I1026 07:48:05.140445   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133812   14247 addons.go:69] Setting registry=true in profile "addons-610291"
	I1026 07:48:05.140674   14247 addons.go:238] Setting addon registry=true in "addons-610291"
	I1026 07:48:05.140735   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.133856   14247 addons.go:69] Setting volumesnapshots=true in profile "addons-610291"
	I1026 07:48:05.141097   14247 addons.go:238] Setting addon volumesnapshots=true in "addons-610291"
	I1026 07:48:05.141130   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.141598   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.141859   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.133812   14247 addons.go:69] Setting ingress=true in profile "addons-610291"
	I1026 07:48:05.143278   14247 out.go:179] * Verifying Kubernetes components...
	I1026 07:48:05.133801   14247 addons.go:69] Setting yakd=true in profile "addons-610291"
	I1026 07:48:05.143388   14247 addons.go:238] Setting addon yakd=true in "addons-610291"
	I1026 07:48:05.143322   14247 addons.go:238] Setting addon ingress=true in "addons-610291"
	I1026 07:48:05.143416   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.143458   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.144063   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.144071   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.147572   14247 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 07:48:05.149505   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.151309   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.188368   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.202026   14247 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1026 07:48:05.202113   14247 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1026 07:48:05.203741   14247 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:05.203768   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1026 07:48:05.203783   14247 addons.go:238] Setting addon default-storageclass=true in "addons-610291"
	I1026 07:48:05.203822   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.203963   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.203992   14247 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.4
	I1026 07:48:05.204055   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1026 07:48:05.204074   14247 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1026 07:48:05.204131   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.205215   14247 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:05.205236   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1026 07:48:05.205293   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.205296   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.217623   14247 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-610291"
	I1026 07:48:05.217696   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:05.218366   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:05.218830   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1026 07:48:05.220265   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1026 07:48:05.220280   14247 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1026 07:48:05.220354   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.220454   14247 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.42
	I1026 07:48:05.221843   14247 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:05.222628   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1026 07:48:05.222721   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.223599   14247 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1026 07:48:05.228183   14247 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:05.228199   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1026 07:48:05.228265   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.233921   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1026 07:48:05.234022   14247 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.45.0
	W1026 07:48:05.235160   14247 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1026 07:48:05.235839   14247 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I1026 07:48:05.235857   14247 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I1026 07:48:05.235958   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.236793   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:05.238448   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:05.239730   14247 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:05.239744   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1026 07:48:05.239793   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.239940   14247 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 07:48:05.243323   14247 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:05.243343   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 07:48:05.243393   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.249907   14247 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1026 07:48:05.250428   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1026 07:48:05.251102   14247 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1026 07:48:05.251106   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1026 07:48:05.251122   14247 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1026 07:48:05.251172   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.253541   14247 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:05.253728   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1026 07:48:05.253799   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.257226   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1026 07:48:05.258729   14247 out.go:179]   - Using image docker.io/registry:3.0.0
	I1026 07:48:05.260280   14247 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1026 07:48:05.260481   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.261502   14247 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I1026 07:48:05.261820   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1026 07:48:05.262050   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.262092   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1026 07:48:05.263819   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1026 07:48:05.266837   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1026 07:48:05.268703   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1026 07:48:05.271072   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1026 07:48:05.273011   14247 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1026 07:48:05.273288   14247 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:05.273653   14247 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 07:48:05.273765   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.274411   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1026 07:48:05.274429   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1026 07:48:05.274585   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.275619   14247 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1026 07:48:05.277022   14247 out.go:179]   - Using image docker.io/busybox:stable
	I1026 07:48:05.278415   14247 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:05.278430   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1026 07:48:05.278479   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:05.290985   14247 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
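	(note: the one-liner above injects a `hosts` block into the CoreDNS Corefile so host.minikube.internal resolves to the gateway IP 192.168.49.1, and inserts `log` before `errors`; the injection is confirmed at 07:48:05.596604 below. The same pipeline, reflowed for readability as a sketch rather than a verbatim re-run:

	    KUBECTL="sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	    $KUBECTL -n kube-system get configmap coredns -o yaml \
	      | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' \
	            -e '/^        errors *$/i \        log' \
	      | $KUBECTL replace -f -
	)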
	I1026 07:48:05.291948   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.292590   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.321835   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.327089   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.327794   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.329122   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.329421   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334404   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334489   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.334843   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.342759   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.343471   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.344621   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:05.346235   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	W1026 07:48:05.353365   14247 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1026 07:48:05.353398   14247 retry.go:31] will retry after 162.155925ms: ssh: handshake failed: EOF
	I1026 07:48:05.371901   14247 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 07:48:05.428281   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1026 07:48:05.461601   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1026 07:48:05.478365   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1026 07:48:05.482442   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1026 07:48:05.482464   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1026 07:48:05.491559   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 07:48:05.497762   14247 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:05.497793   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1026 07:48:05.509424   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 07:48:05.512534   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1026 07:48:05.514758   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1026 07:48:05.517557   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1026 07:48:05.517580   14247 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1026 07:48:05.518231   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1026 07:48:05.518265   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1026 07:48:05.535306   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1026 07:48:05.535363   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1026 07:48:05.542779   14247 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I1026 07:48:05.542806   14247 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1026 07:48:05.552143   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1026 07:48:05.561883   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1026 07:48:05.561908   14247 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1026 07:48:05.563296   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1026 07:48:05.563317   14247 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1026 07:48:05.571930   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:05.572836   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1026 07:48:05.572854   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1026 07:48:05.596056   14247 node_ready.go:35] waiting up to 6m0s for node "addons-610291" to be "Ready" ...
	I1026 07:48:05.596604   14247 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1026 07:48:05.597754   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1026 07:48:05.597774   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1026 07:48:05.597846   14247 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:05.597888   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1026 07:48:05.599855   14247 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:05.599935   14247 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1026 07:48:05.633365   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1026 07:48:05.633393   14247 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1026 07:48:05.641086   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1026 07:48:05.641438   14247 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1026 07:48:05.641527   14247 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1026 07:48:05.647658   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1026 07:48:05.650139   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1026 07:48:05.650158   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1026 07:48:05.684748   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1026 07:48:05.684772   14247 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1026 07:48:05.706070   14247 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:05.706096   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1026 07:48:05.721102   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1026 07:48:05.721194   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1026 07:48:05.724791   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1026 07:48:05.741192   14247 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:05.741211   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1026 07:48:05.761865   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1026 07:48:05.762221   14247 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1026 07:48:05.762274   14247 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1026 07:48:05.785817   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1026 07:48:05.807520   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1026 07:48:05.807612   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1026 07:48:05.870530   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1026 07:48:05.870555   14247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1026 07:48:05.924584   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1026 07:48:05.924663   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1026 07:48:05.976391   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1026 07:48:05.976412   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1026 07:48:06.056616   14247 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:06.056643   14247 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1026 07:48:06.100405   14247 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-610291" context rescaled to 1 replicas
	I1026 07:48:06.111604   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1026 07:48:06.709763   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.194972988s)
	I1026 07:48:06.709802   14247 addons.go:479] Verifying addon ingress=true in "addons-610291"
	I1026 07:48:06.709900   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.157731349s)
	I1026 07:48:06.710697   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.138731209s)
	W1026 07:48:06.710735   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:06.710753   14247 retry.go:31] will retry after 150.812923ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
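	(note: the "apiVersion not set, kind not set" validation failure is consistent with the scp at 07:48:05.235857 above, which copied ig-crd.yaml at only 14 bytes, i.e. an effectively empty manifest rather than a malformed kubectl invocation. Assuming shell access to the node, a quick way to confirm:

	    wc -c /etc/kubernetes/addons/ig-crd.yaml      # 14 bytes cannot hold a CRD
	    head -c 64 /etc/kubernetes/addons/ig-crd.yaml
	)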
	I1026 07:48:06.710794   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.06955818s)
	I1026 07:48:06.710823   14247 addons.go:479] Verifying addon metrics-server=true in "addons-610291"
	I1026 07:48:06.710880   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.063200762s)
	I1026 07:48:06.710896   14247 addons.go:479] Verifying addon registry=true in "addons-610291"
	I1026 07:48:06.711360   14247 out.go:179] * Verifying ingress addon...
	I1026 07:48:06.712956   14247 out.go:179] * Verifying registry addon...
	I1026 07:48:06.712976   14247 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-610291 service yakd-dashboard -n yakd-dashboard
	
	I1026 07:48:06.713696   14247 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1026 07:48:06.715240   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1026 07:48:06.716615   14247 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 07:48:06.716949   14247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 07:48:06.716965   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:06.862998   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:07.170847   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.384937461s)
	W1026 07:48:07.170892   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1026 07:48:07.170915   14247 retry.go:31] will retry after 180.789796ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
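	(note: "ensure CRDs are installed first" is a CRD ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CRD, before the API server has registered the new type. minikube resolves it by retrying, as the --force re-apply at 07:48:07.352754 below shows; an explicit alternative is to wait for the CRD to become established between the two applies, e.g.:

	    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	    kubectl wait --for=condition=established \
	      crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	    kubectl apply -f csi-hostpath-snapshotclass.yaml
	)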
	I1026 07:48:07.171156   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.059460086s)
	I1026 07:48:07.171191   14247 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-610291"
	I1026 07:48:07.173844   14247 out.go:179] * Verifying csi-hostpath-driver addon...
	I1026 07:48:07.176604   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1026 07:48:07.180463   14247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 07:48:07.180483   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:07.281369   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:07.281611   14247 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1026 07:48:07.281628   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:07.352754   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W1026 07:48:07.481456   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:07.481491   14247 retry.go:31] will retry after 231.837783ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
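
(The "apiVersion not set, kind not set" failure above is kubectl's client-side validation: every YAML document in an applied file must carry the two top-level TypeMeta fields, and at least one document in ig-crd.yaml is missing them. A minimal pre-flight check for that condition, assuming gopkg.in/yaml.v3 — an illustrative sketch, not part of minikube:)

    // Reports YAML documents in a manifest that lack apiVersion or kind,
    // the exact condition kubectl rejects in the log above.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    type typeMeta struct {
        APIVersion string `yaml:"apiVersion"`
        Kind       string `yaml:"kind"`
    }

    func main() {
        f, err := os.Open("/etc/kubernetes/addons/ig-crd.yaml")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f) // iterates over "---"-separated documents
        for i := 0; ; i++ {
            var tm typeMeta
            if err := dec.Decode(&tm); err == io.EOF {
                break
            } else if err != nil {
                panic(err)
            }
            if tm.APIVersion == "" || tm.Kind == "" {
                fmt.Printf("document %d: apiVersion or kind not set\n", i)
            }
        }
    }
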
	W1026 07:48:07.599636   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:07.680347   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:07.714349   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:07.716692   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:07.717615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:08.179451   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:08.217172   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:08.217583   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:08.680560   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:08.717053   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:08.717709   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.179873   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:09.280562   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.280766   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:09.679527   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:09.717159   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:09.717551   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:09.838101   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.485304365s)
	I1026 07:48:09.838182   14247 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.123799085s)
	W1026 07:48:09.838225   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:09.838257   14247 retry.go:31] will retry after 457.886509ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
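
(The retry.go:31 lines interleave the failures with jittered, roughly exponential delays — 231ms, then 457ms, with longer and occasionally shorter waits below. A hypothetical sketch of that pattern, not minikube's actual retry.go:)

    // retryWithBackoff retries fn with jittered, roughly exponential backoff,
    // echoing the "will retry after ..." lines printed above.
    package retrysketch

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            ceil := base << uint(i)                      // exponential ceiling: base, 2*base, 4*base, ...
            d := time.Duration(rand.Int63n(int64(ceil))) // full jitter below the ceiling (base must be > 0)
            fmt.Printf("will retry after %s: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }
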
	W1026 07:48:10.099158   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:10.179746   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:10.281094   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:10.281245   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:10.297303   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:10.681038   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:10.717359   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:10.717575   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:10.823399   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:10.823427   14247 retry.go:31] will retry after 1.248439599s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:11.180163   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:11.281282   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:11.281502   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:11.680633   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:11.717219   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:11.717831   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.072216   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:12.180576   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:12.281591   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.281756   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1026 07:48:12.593677   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:12.593707   14247 retry.go:31] will retry after 700.854454ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:48:12.598951   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
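
(The node_ready.go:57 warnings poll the node object until its Ready condition flips to True. A minimal sketch of that check, assuming client-go; the function name nodeReady is hypothetical:)

    // nodeReady reports whether a node's Ready condition is True — the check
    // behind the `node "addons-610291" has "Ready":"False"` lines above.
    package nodesketch

    import (
        "context"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }
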
	I1026 07:48:12.679615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:12.717159   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:12.717833   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:12.799044   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1026 07:48:12.799112   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:12.816164   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
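
(The cli_runner.go call above uses a Go template to read the host port Docker mapped onto the container's 22/tcp, which sshutil then dials — 127.0.0.1:32768 in this run. The same lookup from Go, as a minimal sketch using os/exec; the single quotes around the template in the log are shell quoting and are not needed when exec'ing directly:)

    // hostSSHPort returns the host port Docker mapped to the container's
    // 22/tcp, mirroring the `docker container inspect -f ...` call above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-610291")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", port) // 32768 in the run logged above
    }
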
	I1026 07:48:12.930110   14247 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1026 07:48:12.942973   14247 addons.go:238] Setting addon gcp-auth=true in "addons-610291"
	I1026 07:48:12.943048   14247 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:48:12.943588   14247 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:48:12.962688   14247 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1026 07:48:12.962735   14247 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:48:12.979705   14247 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:48:13.077691   14247 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1026 07:48:13.078980   14247 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1026 07:48:13.080302   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1026 07:48:13.080319   14247 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1026 07:48:13.093496   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1026 07:48:13.093516   14247 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1026 07:48:13.106737   14247 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1026 07:48:13.106755   14247 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1026 07:48:13.119180   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
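
(Each ssh_runner.go "kubectl apply" in this log runs the version-pinned kubectl binary with an explicit KUBECONFIG in the environment. A minimal sketch of that invocation pattern using os/exec, with the paths from this log; the helper name applyManifests is hypothetical:)

    // applyManifests runs `kubectl apply -f ...` with an explicit KUBECONFIG,
    // like the gcp-auth apply immediately above.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func applyManifests(kubectl, kubeconfig string, files ...string) error {
        args := []string{"apply"}
        for _, f := range files {
            args = append(args, "-f", f)
        }
        cmd := exec.Command(kubectl, args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        err := applyManifests(
            "/var/lib/minikube/binaries/v1.34.1/kubectl",
            "/var/lib/minikube/kubeconfig",
            "/etc/kubernetes/addons/gcp-auth-ns.yaml",
            "/etc/kubernetes/addons/gcp-auth-service.yaml",
            "/etc/kubernetes/addons/gcp-auth-webhook.yaml",
        )
        if err != nil {
            fmt.Fprintln(os.Stderr, "apply failed:", err)
        }
    }
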
	I1026 07:48:13.179181   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:13.216623   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:13.217475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:13.294696   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:13.426675   14247 addons.go:479] Verifying addon gcp-auth=true in "addons-610291"
	I1026 07:48:13.428487   14247 out.go:179] * Verifying gcp-auth addon...
	I1026 07:48:13.430984   14247 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1026 07:48:13.434873   14247 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1026 07:48:13.434901   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:13.680825   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:13.716743   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:13.717404   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:13.848454   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:13.848487   14247 retry.go:31] will retry after 2.481579043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:13.933904   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:14.180271   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:14.216699   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:14.217478   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:14.434672   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:14.599113   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:14.679985   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:14.716854   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:14.718423   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:14.934009   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:15.180007   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:15.216614   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:15.217351   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:15.433824   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:15.679789   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:15.717311   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:15.717851   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:15.934819   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:16.180367   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:16.216698   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:16.217415   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:16.330812   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:16.433582   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:16.599424   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:16.679330   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:16.717069   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:16.717352   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:16.846176   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:16.846218   14247 retry.go:31] will retry after 3.360187984s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:16.933591   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:17.179544   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:17.217302   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:17.217744   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:17.434168   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:17.679925   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:17.716384   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:17.718047   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:17.933580   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:18.179482   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:18.217015   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:18.217474   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:18.434106   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:18.679909   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:18.716434   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:18.717789   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:18.934358   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:19.098831   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:19.179358   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:19.216867   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:19.217426   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:19.434416   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:19.679578   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:19.717059   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:19.717615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:19.934480   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:20.179914   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:20.207025   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:20.217171   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:20.217768   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:20.434007   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:20.680619   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:20.717368   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:20.717793   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:20.725696   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:20.725721   14247 retry.go:31] will retry after 2.38893853s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:20.934342   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:21.179244   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:21.216742   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:21.217219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:21.434278   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:21.598727   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:21.678947   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:21.716692   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:21.718195   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:21.933823   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:22.179688   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:22.217244   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:22.217746   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:22.434177   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:22.680207   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:22.716616   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:22.717240   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:22.933701   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:23.115300   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:23.180628   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:23.217124   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:23.217642   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:23.434203   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:23.638456   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:23.638480   14247 retry.go:31] will retry after 4.646816814s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:23.679850   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:23.716069   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:23.717644   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:23.934140   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:24.098560   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:24.180556   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:24.217083   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:24.217666   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:24.434140   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:24.679219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:24.716719   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:24.718193   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:24.933676   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:25.179613   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:25.216512   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:25.218154   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:25.433907   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:25.679885   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:25.716542   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:25.717935   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:25.933405   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:26.098935   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:26.179936   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:26.216563   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:26.218050   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:26.433737   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:26.679408   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:26.716881   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:26.717759   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:26.934370   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:27.179138   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:27.216752   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:27.217234   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:27.433911   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:27.679858   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:27.716129   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:27.717593   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:27.934144   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:28.180327   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:28.216903   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:28.217475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:28.285654   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:28.434416   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:28.599675   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:28.679851   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:28.716433   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:28.717979   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:28.804753   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:28.804777   14247 retry.go:31] will retry after 6.113753708s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:28.934582   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:29.179439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:29.216967   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:29.217779   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:29.434457   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:29.679363   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:29.716832   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:29.717530   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:29.933936   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:30.180622   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:30.217039   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:30.217628   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:30.434439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:30.679457   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:30.717094   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:30.717562   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:30.934651   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:31.099358   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:31.179743   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:31.217213   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:31.217823   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:31.433563   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:31.679684   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:31.717297   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:31.717887   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:31.933529   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:32.179752   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:32.217188   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:32.217795   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:32.434389   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:32.678964   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:32.716598   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:32.717843   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:32.934221   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:33.179350   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:33.216846   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:33.217402   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:33.434165   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:33.598461   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:33.679958   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:33.716748   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:33.718191   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:33.933621   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:34.179327   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:34.216654   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:34.217300   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:34.434006   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:34.680480   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:34.717101   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:34.717658   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:34.918990   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:34.933741   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:35.179708   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:35.216463   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:35.218432   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:35.434328   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:35.444023   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:35.444050   14247 retry.go:31] will retry after 8.889779837s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:35.679924   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:35.716044   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:35.717495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:35.934020   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:36.098649   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	I1026 07:48:36.179058   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:36.216728   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:36.218142   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:36.434129   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	[... kapi.go:96 polling of the same four selectors repeats every ~250ms (all Pending), and node_ready.go:57 "Ready":"False" warnings recur every ~2s, from 07:48:36.67 through 07:48:44.21 ...]
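The kapi.go:96 lines poll label selectors until every matching pod leaves Pending and reports Ready. A hypothetical manual equivalent for one of the selectors (the namespace is inferred from the kube-system pod listings later in this log; not part of the recorded run):

	# Illustrative only: list the pods matching the csi-hostpath-driver
	# selector and print each pod's phase, which is the state the poll
	# loop above is waiting on.
	kubectl -n kube-system get pods \
	  -l kubernetes.io/minikube-addons=csi-hostpath-driver \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.phase}{"\n"}{end}'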
	I1026 07:48:44.333981   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:48:44.433495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:44.679844   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:44.716666   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:44.718156   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1026 07:48:44.853470   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:48:44.853507   14247 retry.go:31] will retry after 25.607623623s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	[... stdout/stderr identical to the apply failure above ...]
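The validation failure above is kubectl's client-side schema check: every document in an applied manifest must declare both apiVersion and kind, and at least one document in ig-crd.yaml declares neither (--validate=false, as the message notes, would skip the check rather than fix the manifest). An illustrative way to reproduce and localize the problem without touching the cluster (hypothetical commands, not part of the test run):

	# A client-side dry run reproduces the same validation error:
	kubectl apply --dry-run=client -f /etc/kubernetes/addons/ig-crd.yaml
	# grep shows which top-level fields each YAML document actually declares:
	grep -nE '^(---|apiVersion:|kind:)' /etc/kubernetes/addons/ig-crd.yaml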
	I1026 07:48:44.933827   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:48:45.099394   14247 node_ready.go:57] node "addons-610291" has "Ready":"False" status (will retry)
	[... polling continues, all four selectors still Pending, from 07:48:45.17 through 07:48:46.93 ...]
	I1026 07:48:47.181237   14247 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1026 07:48:47.181275   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:47.216557   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:47.218528   14247 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1026 07:48:47.218547   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:47.434456   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:47.599874   14247 node_ready.go:49] node "addons-610291" is "Ready"
	I1026 07:48:47.599909   14247 node_ready.go:38] duration metric: took 42.003820542s for node "addons-610291" to be "Ready" ...
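node_ready.go gates on the node's Ready condition; an illustrative manual check of the same condition (node name taken from this run; not part of the recorded output):

	# Prints "True" once the kubelet reports the node Ready (illustrative).
	kubectl get node addons-610291 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'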
	I1026 07:48:47.599927   14247 api_server.go:52] waiting for apiserver process to appear ...
	I1026 07:48:47.599976   14247 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 07:48:47.619028   14247 api_server.go:72] duration metric: took 42.485263909s to wait for apiserver process to appear ...
	I1026 07:48:47.619071   14247 api_server.go:88] waiting for apiserver healthz status ...
	I1026 07:48:47.619095   14247 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1026 07:48:47.623883   14247 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1026 07:48:47.624839   14247 api_server.go:141] control plane version: v1.34.1
	I1026 07:48:47.624868   14247 api_server.go:131] duration metric: took 5.788922ms to wait for apiserver health ...
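The healthz probe can be reproduced against the same endpoint; a minimal sketch (the address comes from this run's log, and -k skips verification of minikube's self-signed apiserver certificate):

	# Expect HTTP 200 with body "ok" on a healthy apiserver; depending on
	# apiserver flags, anonymous access to /healthz may be denied (illustrative).
	curl -k https://192.168.49.2:8443/healthz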
	I1026 07:48:47.624879   14247 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 07:48:47.629271   14247 system_pods.go:59] 20 kube-system pods found
	I1026 07:48:47.629338   14247 system_pods.go:61] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:47.629355   14247 system_pods.go:61] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 07:48:47.629366   14247 system_pods.go:61] "csi-hostpath-attacher-0" [427cd88d-7809-4d5c-b742-dc613723c8eb] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1026 07:48:47.629378   14247 system_pods.go:61] "csi-hostpath-resizer-0" [5632b492-535d-49fc-b4f4-780142412509] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1026 07:48:47.629390   14247 system_pods.go:61] "csi-hostpathplugin-nnl9n" [b19e7a2f-2826-4c12-9872-05c7b3daa41a] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1026 07:48:47.629399   14247 system_pods.go:61] "etcd-addons-610291" [e218f6a9-b3e5-47a6-affc-0ced70bf0a2e] Running
	I1026 07:48:47.629405   14247 system_pods.go:61] "kindnet-b4jwg" [29fff50a-3d72-418d-8298-36d257dc9068] Running
	I1026 07:48:47.629414   14247 system_pods.go:61] "kube-apiserver-addons-610291" [9dcb8e97-6fe0-4cb1-9b62-d8193e9965f2] Running
	I1026 07:48:47.629419   14247 system_pods.go:61] "kube-controller-manager-addons-610291" [6e72e4d1-d1f5-45db-a473-17bee208af30] Running
	I1026 07:48:47.629430   14247 system_pods.go:61] "kube-ingress-dns-minikube" [16fe29e4-d3c1-404f-b1f5-d18bcec18f13] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1026 07:48:47.629436   14247 system_pods.go:61] "kube-proxy-mxqr8" [39564011-18e0-4076-9355-be6c38423d9e] Running
	I1026 07:48:47.629448   14247 system_pods.go:61] "kube-scheduler-addons-610291" [01bf8ae9-291c-4cd1-a1bd-c60d1e1b158e] Running
	I1026 07:48:47.629455   14247 system_pods.go:61] "metrics-server-85b7d694d7-fs7sf" [78b9ec71-29a1-4d28-979c-6a0735900428] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1026 07:48:47.629468   14247 system_pods.go:61] "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1026 07:48:47.629480   14247 system_pods.go:61] "registry-6b586f9694-9xvr4" [15f4eef6-d42e-43fa-8958-437758150119] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1026 07:48:47.629488   14247 system_pods.go:61] "registry-creds-764b6fb674-4mf5m" [5f373a48-52c9-441e-a2db-28351bc83a48] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1026 07:48:47.629496   14247 system_pods.go:61] "registry-proxy-xgtqv" [5365db61-16ee-452b-9ccc-eaf42f532ce7] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1026 07:48:47.629507   14247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-klrbn" [f542d0aa-2574-4ee1-b4e7-f918488c019f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.629520   14247 system_pods.go:61] "snapshot-controller-7d9fbc56b8-qx7lp" [7e6af6b6-ad2b-4990-ab5b-aca4b8ac704e] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1026 07:48:47.629528   14247 system_pods.go:61] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 07:48:47.629542   14247 system_pods.go:74] duration metric: took 4.656727ms to wait for pod list to return data ...
	I1026 07:48:47.629552   14247 default_sa.go:34] waiting for default service account to be created ...
	I1026 07:48:47.631782   14247 default_sa.go:45] found service account: "default"
	I1026 07:48:47.631800   14247 default_sa.go:55] duration metric: took 2.241157ms for default service account to be created ...
	I1026 07:48:47.631810   14247 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 07:48:47.727663   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:47.727708   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:47.727871   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:47.728747   14247 system_pods.go:86] 20 kube-system pods found
	[... all 20 pod statuses unchanged from the 07:48:47.629 listing above ...]
	I1026 07:48:47.728887   14247 retry.go:31] will retry after 282.680512ms: missing components: kube-dns
	I1026 07:48:47.935380   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:48.017085   14247 system_pods.go:86] 20 kube-system pods found
	[... all 20 pod statuses unchanged from the 07:48:47.629 listing above ...]
	I1026 07:48:48.017338   14247 retry.go:31] will retry after 344.079184ms: missing components: kube-dns
	I1026 07:48:48.181837   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:48:48.218573   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:48:48.218960   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:48:48.368866   14247 system_pods.go:86] 20 kube-system pods found
	I1026 07:48:48.368951   14247 system_pods.go:89] "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1026 07:48:48.368963   14247 system_pods.go:89] "coredns-66bc5c9577-dqbbr" [a5360d6a-f7ac-49c8-a38b-de0cbc019ada] Running
	[... statuses of the remaining pods unchanged from the earlier listings ...]
	I1026 07:48:48.369122   14247 system_pods.go:89] "storage-provisioner" [e20648dc-41b5-404c-86ec-550b4b75c80a] Running
	I1026 07:48:48.369133   14247 system_pods.go:126] duration metric: took 737.315625ms to wait for k8s-apps to be running ...
	I1026 07:48:48.369142   14247 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 07:48:48.369193   14247 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 07:48:48.434803   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:48:48.438474   14247 system_svc.go:56] duration metric: took 69.323041ms WaitForService to wait for kubelet
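WaitForService relies on systemctl is-active, which exits 0 only while the unit is active; an illustrative standalone equivalent (using the plain unit name rather than the exact invocation logged above):

	# Exit status 0 means the kubelet unit is currently active (illustrative).
	sudo systemctl is-active --quiet kubelet && echo "kubelet running"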
	I1026 07:48:48.438502   14247 kubeadm.go:586] duration metric: took 43.304744519s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 07:48:48.438523   14247 node_conditions.go:102] verifying NodePressure condition ...
	I1026 07:48:48.441941   14247 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 07:48:48.441963   14247 node_conditions.go:123] node cpu capacity is 8
	I1026 07:48:48.441975   14247 node_conditions.go:105] duration metric: took 3.447249ms to run NodePressure ...
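The NodePressure verification reads the node's reported capacities; the figures logged above can be fetched directly (illustrative; node name from this run):

	# Prints the CPU count and ephemeral-storage capacity checked above.
	kubectl get node addons-610291 \
	  -o jsonpath='{.status.capacity.cpu}{" cpus, "}{.status.capacity.ephemeral-storage}{"\n"}'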
	I1026 07:48:48.441987   14247 start.go:241] waiting for startup goroutines ...
	[... kapi.go:96 polling of the csi-hostpath-driver, ingress-nginx, registry, and gcp-auth selectors continues (all Pending) from 07:48:48.68 through 07:49:10.43 ...]
	I1026 07:49:10.461803   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1026 07:49:10.680615   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:10.717244   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:10.718368   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:10.934425   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W1026 07:49:11.156908   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I1026 07:49:11.156940   14247 retry.go:31] will retry after 44.795297433s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
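Both attempts fail identically: kubectl's client-side validation rejects /etc/kubernetes/addons/ig-crd.yaml because the manifest is missing its mandatory top-level apiVersion and kind fields, while the companion ig-deployment.yaml applies cleanly (every object in it comes back "unchanged" or "configured"). The file's contents are not captured in this report, so the following is only a hedged sketch of a header that passes this check, verifiable offline with a client-side dry run; all names are placeholders, not the real inspektor-gadget CRD.

    # Minimal CRD skeleton that satisfies the "[apiVersion not set, kind not set]"
    # validation; widgets.example.com and friends are hypothetical names.
    cat <<'EOF' | kubectl apply --dry-run=client -f -
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
    EOF

Note that the --validate=false escape hatch suggested in the error message would only silence the client; the API server would still reject an object that carries no apiVersion or kind.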
	I1026 07:49:11.179847   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.217560   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.217796   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.435068   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:11.679958   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:11.717015   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:11.718039   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:11.934219   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.180724   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.221472   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.221498   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.434590   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:12.679622   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:12.717604   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:12.717916   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:12.934636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.181019   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.217146   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.217763   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.434651   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:13.679439   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:13.716968   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:13.717597   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:13.934035   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.180161   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.217356   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.218551   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.435344   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:14.680472   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:14.717449   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:14.717915   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:14.933636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.180369   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.217744   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.217781   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.434636   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:15.679945   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:15.717487   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:15.718195   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:15.934530   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.181377   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.219342   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.219475   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1026 07:49:16.435063   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:16.680592   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:16.720352   14247 kapi.go:107] duration metric: took 1m10.005110417s to wait for kubernetes.io/minikube-addons=registry ...
	I1026 07:49:16.720416   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:16.934132   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.200488   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.216781   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.502121   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:17.680396   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:17.717506   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:17.934022   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.180320   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.216924   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.435133   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:18.680497   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:18.717073   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:18.934151   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.261536   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.261560   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.434014   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:19.680395   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:19.717799   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:19.935452   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.180698   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.217725   14247 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1026 07:49:20.436444   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:20.680074   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:20.717747   14247 kapi.go:107] duration metric: took 1m14.004047934s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1026 07:49:20.934441   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.180808   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.434366   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:21.680495   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:21.934211   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.180724   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:22.434554   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:22.681048   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:22.933553   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.179307   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.433864   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:23.680179   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:23.933809   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1026 07:49:24.179928   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:24.434620   14247 kapi.go:107] duration metric: took 1m11.003636039s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1026 07:49:24.436293   14247 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-610291 cluster.
	I1026 07:49:24.437567   14247 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1026 07:49:24.438790   14247 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
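The three messages above also name the per-pod opt-out: the admission webhook leaves a pod alone when its configuration carries a label with the `gcp-auth-skip-secret` key. A minimal sketch follows; the pod name is illustrative, the image is one pulled elsewhere in this report, and the label value is an assumption, since the message only specifies the key.

    # Hypothetical pod that opts out of credential mounting via the label key
    # quoted in the gcp-auth message above; the "true" value is an assumption.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds
      labels:
        gcp-auth-skip-secret: "true"
    spec:
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF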
	I1026 07:49:24.681115   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.179645   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:25.680015   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.180606   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:26.712294   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.180095   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:27.679380   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.180159   14247 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1026 07:49:28.679912   14247 kapi.go:107] duration metric: took 1m21.503309039s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
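With this line all four kapi.go label waits have completed: registry (1m10s), ingress-nginx (1m14s), gcp-auth (1m11s) and csi-hostpath-driver (1m21.5s). Each wait polls pods matched by the label printed in the log, which can be spot-checked by hand; the selector below is copied verbatim from the log line, and kube-system is where the container listing later in this report places the csi-hostpath pods.

    # Manual spot-check of the last kapi.go wait; selector copied from the log.
    kubectl -n kube-system get pods \
      -l kubernetes.io/minikube-addons=csi-hostpath-driver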
	I1026 07:49:55.953323   14247 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W1026 07:49:56.486627   14247 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W1026 07:49:56.486718   14247 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1026 07:49:56.488759   14247 out.go:179] * Enabled addons: nvidia-device-plugin, amd-gpu-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, metrics-server, registry-creds, yakd, storage-provisioner-rancher, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1026 07:49:56.490189   14247 addons.go:514] duration metric: took 1m51.356512789s for enable addons: enabled=[nvidia-device-plugin amd-gpu-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass metrics-server registry-creds yakd storage-provisioner-rancher volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1026 07:49:56.490224   14247 start.go:246] waiting for cluster config update ...
	I1026 07:49:56.490241   14247 start.go:255] writing updated cluster config ...
	I1026 07:49:56.490480   14247 ssh_runner.go:195] Run: rm -f paused
	I1026 07:49:56.494321   14247 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:49:56.497814   14247 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dqbbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.501582   14247 pod_ready.go:94] pod "coredns-66bc5c9577-dqbbr" is "Ready"
	I1026 07:49:56.501604   14247 pod_ready.go:86] duration metric: took 3.77084ms for pod "coredns-66bc5c9577-dqbbr" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.503423   14247 pod_ready.go:83] waiting for pod "etcd-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.506548   14247 pod_ready.go:94] pod "etcd-addons-610291" is "Ready"
	I1026 07:49:56.506567   14247 pod_ready.go:86] duration metric: took 3.126562ms for pod "etcd-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.508302   14247 pod_ready.go:83] waiting for pod "kube-apiserver-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.511529   14247 pod_ready.go:94] pod "kube-apiserver-addons-610291" is "Ready"
	I1026 07:49:56.511549   14247 pod_ready.go:86] duration metric: took 3.228239ms for pod "kube-apiserver-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.513102   14247 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:56.898347   14247 pod_ready.go:94] pod "kube-controller-manager-addons-610291" is "Ready"
	I1026 07:49:56.898371   14247 pod_ready.go:86] duration metric: took 385.251705ms for pod "kube-controller-manager-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.098537   14247 pod_ready.go:83] waiting for pod "kube-proxy-mxqr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.498360   14247 pod_ready.go:94] pod "kube-proxy-mxqr8" is "Ready"
	I1026 07:49:57.498386   14247 pod_ready.go:86] duration metric: took 399.825144ms for pod "kube-proxy-mxqr8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:57.698339   14247 pod_ready.go:83] waiting for pod "kube-scheduler-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:58.097699   14247 pod_ready.go:94] pod "kube-scheduler-addons-610291" is "Ready"
	I1026 07:49:58.097724   14247 pod_ready.go:86] duration metric: took 399.362741ms for pod "kube-scheduler-addons-610291" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 07:49:58.097735   14247 pod_ready.go:40] duration metric: took 1.603386693s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 07:49:58.139679   14247 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 07:49:58.141742   14247 out.go:179] * Done! kubectl is now configured to use "addons-610291" cluster and "default" namespace by default
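The run closes with the pod_ready.go sweep over the core kube-system pods, all of which report Ready within 1.6s. The label selectors and the 4m0s budget are stated in the log itself; only the loop below is an assumption about how one would redo the check by hand.

    # Hand-rolled equivalent of the pod_ready.go sweep; selectors and timeout
    # are taken from the log lines above, the loop is illustrative.
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy \
               component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$sel" \
        --for=condition=Ready --timeout=4m0s
    done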
	
	
	==> CRI-O <==
	Oct 26 07:49:58 addons-610291 crio[781]: time="2025-10-26T07:49:58.989900094Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 07:49:59 addons-610291 crio[781]: time="2025-10-26T07:49:59.986965402Z" level=info msg="Removing container: 7b81990fa2f8c24b3f221d4b516493b726453ffe6f1ed4012acef9d7940268a7" id=53a6c170-e7ab-4386-8ab5-3dca316c2da1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 07:49:59 addons-610291 crio[781]: time="2025-10-26T07:49:59.992949867Z" level=info msg="Removed container 7b81990fa2f8c24b3f221d4b516493b726453ffe6f1ed4012acef9d7940268a7: gcp-auth/gcp-auth-certs-patch-4jk2s/patch" id=53a6c170-e7ab-4386-8ab5-3dca316c2da1 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 07:49:59 addons-610291 crio[781]: time="2025-10-26T07:49:59.99448979Z" level=info msg="Removing container: a5eac72e20285b4c5c9fc0c420df67b0222b8a47a64b12a0e473aeea056d62b5" id=9cfd7964-e3c6-4ed0-84d2-4feda1a0cf38 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.001672865Z" level=info msg="Removed container a5eac72e20285b4c5c9fc0c420df67b0222b8a47a64b12a0e473aeea056d62b5: gcp-auth/gcp-auth-certs-create-4pk42/create" id=9cfd7964-e3c6-4ed0-84d2-4feda1a0cf38 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.008067959Z" level=info msg="Stopping pod sandbox: a065ac20bd6432af477258b81bb983e3a60805339755bdf722aae5f5afffbc95" id=6476fb3d-b6f0-4bc8-97a5-b881dbdf95ba name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.008109477Z" level=info msg="Stopped pod sandbox (already stopped): a065ac20bd6432af477258b81bb983e3a60805339755bdf722aae5f5afffbc95" id=6476fb3d-b6f0-4bc8-97a5-b881dbdf95ba name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.008488702Z" level=info msg="Removing pod sandbox: a065ac20bd6432af477258b81bb983e3a60805339755bdf722aae5f5afffbc95" id=77fa8d7e-4e9e-4154-aef9-425fddb7ece3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.011286739Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.011337851Z" level=info msg="Removed pod sandbox: a065ac20bd6432af477258b81bb983e3a60805339755bdf722aae5f5afffbc95" id=77fa8d7e-4e9e-4154-aef9-425fddb7ece3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.011724745Z" level=info msg="Stopping pod sandbox: 2a1a3dd16e0be612a7c652b472ccd82016dc96f1a08a6372ae2d04c6d7da2fff" id=8c892332-c833-412c-bb03-5ffcb6133c6f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.011766791Z" level=info msg="Stopped pod sandbox (already stopped): 2a1a3dd16e0be612a7c652b472ccd82016dc96f1a08a6372ae2d04c6d7da2fff" id=8c892332-c833-412c-bb03-5ffcb6133c6f name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.012089596Z" level=info msg="Removing pod sandbox: 2a1a3dd16e0be612a7c652b472ccd82016dc96f1a08a6372ae2d04c6d7da2fff" id=04d822f6-8617-4dfe-80bc-eaa7ae1aff0e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.014793503Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.014840009Z" level=info msg="Removed pod sandbox: 2a1a3dd16e0be612a7c652b472ccd82016dc96f1a08a6372ae2d04c6d7da2fff" id=04d822f6-8617-4dfe-80bc-eaa7ae1aff0e name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.245416371Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=f7d55281-5fc9-437e-bfc4-fd7c741d778c name=/runtime.v1.ImageService/PullImage
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.24600735Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0fe9b7d2-0e53-4400-830e-f3eae27c388e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.247375013Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5415f37e-16ca-4fce-8287-f4b3e5d6080a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.250867207Z" level=info msg="Creating container: default/busybox/busybox" id=8d3b1313-82c4-4f31-a6a0-96948b2c7f8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.250966621Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.25610689Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.25654451Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.284950279Z" level=info msg="Created container bfc678b107c17a02d2c7d1b42a8d2f005d824d1601efd7cbd292bf41d468a0f4: default/busybox/busybox" id=8d3b1313-82c4-4f31-a6a0-96948b2c7f8a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.285541944Z" level=info msg="Starting container: bfc678b107c17a02d2c7d1b42a8d2f005d824d1601efd7cbd292bf41d468a0f4" id=b67db60f-396e-4a77-8cbe-8084ba403889 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 07:50:00 addons-610291 crio[781]: time="2025-10-26T07:50:00.2872638Z" level=info msg="Started container" PID=6559 containerID=bfc678b107c17a02d2c7d1b42a8d2f005d824d1601efd7cbd292bf41d468a0f4 description=default/busybox/busybox id=b67db60f-396e-4a77-8cbe-8084ba403889 name=/runtime.v1.RuntimeService/StartContainer sandboxID=81279115937c0d401d4fe73417e053d7323cf5bc6e3a6e9134c0ec449a22859d
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	bfc678b107c17       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   81279115937c0       busybox                                     default
	ff1e9c088f2c4       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          40 seconds ago       Running             csi-snapshotter                          0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	ae9bf07bc6fbd       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          41 seconds ago       Running             csi-provisioner                          0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	790b75b33837b       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            42 seconds ago       Running             liveness-probe                           0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	6fbcc38580336       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           43 seconds ago       Running             hostpath                                 0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	9abeb3423d9e6       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 44 seconds ago       Running             gcp-auth                                 0                   22dc5993fe473       gcp-auth-78565c9fb4-n2jng                   gcp-auth
	b52991ad016eb       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                46 seconds ago       Running             node-driver-registrar                    0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	b248cc878fb62       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:db9cb3dd78ffab71eb8746afcb57bd3859993cb150a76d8b7cebe79441c702cb                            46 seconds ago       Running             gadget                                   0                   238142f6a4e0b       gadget-qvptl                                gadget
	85c0a4904ad2b       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             48 seconds ago       Running             controller                               0                   88a64fef92686       ingress-nginx-controller-675c5ddd98-s4j4n   ingress-nginx
	d22cb2ff56e59       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              52 seconds ago       Running             registry-proxy                           0                   9c72000c06e64       registry-proxy-xgtqv                        kube-system
	9787a3b2f3c6f       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   55 seconds ago       Running             csi-external-health-monitor-controller   0                   bc3ba90d25ef2       csi-hostpathplugin-nnl9n                    kube-system
	48d279f0283b0       nvcr.io/nvidia/k8s-device-plugin@sha256:ad155f1089b64673c75b2f39258f0791cbad6d3011419726ec605196981e1c32                                     56 seconds ago       Running             nvidia-device-plugin-ctr                 0                   d40e313f87cfa       nvidia-device-plugin-daemonset-9g5j7        kube-system
	4bec0c61b08ba       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              About a minute ago   Running             csi-resizer                              0                   91b694ebbaf6f       csi-hostpath-resizer-0                      kube-system
	a5104f945769b       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   51e59d87fe1db       snapshot-controller-7d9fbc56b8-klrbn        kube-system
	d0946ef457f29       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     About a minute ago   Running             amd-gpu-device-plugin                    0                   7d9a488fa4735       amd-gpu-device-plugin-79j4j                 kube-system
	d78d517513eb7       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             About a minute ago   Running             csi-attacher                             0                   83565fd2ba0e9       csi-hostpath-attacher-0                     kube-system
	9606d1f8109db       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      About a minute ago   Running             volume-snapshot-controller               0                   1c97ac738dbda       snapshot-controller-7d9fbc56b8-qx7lp        kube-system
	755f6bc76edf9       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              patch                                    0                   0fd296dfb7e36       ingress-nginx-admission-patch-d6z8s         ingress-nginx
	4f4894d9a3135       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              About a minute ago   Running             yakd                                     0                   b6220cb94f3a8       yakd-dashboard-5ff678cb9-9mp2x              yakd-dashboard
	fbe791bbcaa55       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             About a minute ago   Running             local-path-provisioner                   0                   8cca98cb5edc5       local-path-provisioner-648f6765c9-kr5wg     local-path-storage
	bec0f3b559fbe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   About a minute ago   Exited              create                                   0                   b3c1ef5df582d       ingress-nginx-admission-create-pdppz        ingress-nginx
	060d78eaf3f37       gcr.io/cloud-spanner-emulator/emulator@sha256:66030f526b1bc41f0d2027b496fd8fa53f620bf9d5a18baa07990e67f1a20237                               About a minute ago   Running             cloud-spanner-emulator                   0                   a134960725cfa       cloud-spanner-emulator-86bd5cbb97-h4cbz     default
	d0e6d85b2ec86       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           About a minute ago   Running             registry                                 0                   c8c833aee7f0f       registry-6b586f9694-9xvr4                   kube-system
	604c3b50083ea       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        About a minute ago   Running             metrics-server                           0                   51fda9c9e446b       metrics-server-85b7d694d7-fs7sf             kube-system
	2749efc7ce147       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               About a minute ago   Running             minikube-ingress-dns                     0                   48d415ccfa3b4       kube-ingress-dns-minikube                   kube-system
	0d587f45003f4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             About a minute ago   Running             coredns                                  0                   9fc8185235027       coredns-66bc5c9577-dqbbr                    kube-system
	3504b65df25d5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   9e45a9664cdd4       storage-provisioner                         kube-system
	e6e97259a969c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             2 minutes ago        Running             kube-proxy                               0                   ed953d165bbf5       kube-proxy-mxqr8                            kube-system
	4c0deee84eddb       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             2 minutes ago        Running             kindnet-cni                              0                   ba1bd3062a478       kindnet-b4jwg                               kube-system
	aa644f5a3e4c4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             2 minutes ago        Running             etcd                                     0                   6c0e5aac756c9       etcd-addons-610291                          kube-system
	2190af960ec64       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             2 minutes ago        Running             kube-apiserver                           0                   29475dcb88eb3       kube-apiserver-addons-610291                kube-system
	f9726db7b5e96       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             2 minutes ago        Running             kube-scheduler                           0                   ae8c57e37d754       kube-scheduler-addons-610291                kube-system
	a92d6c36860a8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             2 minutes ago        Running             kube-controller-manager                  0                   5d0a6499d9ba8       kube-controller-manager-addons-610291       kube-system
	
	
	==> coredns [0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5] <==
	[INFO] 10.244.0.19:58982 - 47894 "AAAA IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003327748s
	[INFO] 10.244.0.19:50012 - 42784 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000083955s
	[INFO] 10.244.0.19:50012 - 42431 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000057816s
	[INFO] 10.244.0.19:50268 - 23669 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000046003s
	[INFO] 10.244.0.19:50268 - 23940 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00007966s
	[INFO] 10.244.0.19:53600 - 7234 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000076532s
	[INFO] 10.244.0.19:53600 - 6919 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000124226s
	[INFO] 10.244.0.19:58392 - 2130 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000127665s
	[INFO] 10.244.0.19:58392 - 1866 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000153057s
	[INFO] 10.244.0.22:40487 - 58804 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000203643s
	[INFO] 10.244.0.22:33153 - 38193 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.00026124s
	[INFO] 10.244.0.22:32770 - 49293 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000100537s
	[INFO] 10.244.0.22:35645 - 11514 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000123674s
	[INFO] 10.244.0.22:43440 - 6194 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000096116s
	[INFO] 10.244.0.22:48088 - 18203 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124845s
	[INFO] 10.244.0.22:33126 - 2841 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.003959968s
	[INFO] 10.244.0.22:52881 - 64115 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.004550146s
	[INFO] 10.244.0.22:36973 - 32810 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007546328s
	[INFO] 10.244.0.22:49558 - 47935 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007688116s
	[INFO] 10.244.0.22:50802 - 5373 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004682366s
	[INFO] 10.244.0.22:48157 - 16433 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006144026s
	[INFO] 10.244.0.22:57088 - 56177 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004272234s
	[INFO] 10.244.0.22:60685 - 26338 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.00524819s
	[INFO] 10.244.0.22:58451 - 48109 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001022634s
	[INFO] 10.244.0.22:33747 - 59035 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.002167279s
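The NXDOMAIN ladder above is ordinary resolv.conf search-path expansion rather than a failure: each lookup is retried with every search suffix visible in the queries (svc.cluster.local, cluster.local, us-east4-a.c.k8s-minikube.internal, c.k8s-minikube.internal, google.internal, local) before the bare name finally resolves NOERROR. A hedged way to trigger the same walk from inside the cluster:

    # Hypothetical reproduction; each suffix the resolver tries shows up as a
    # query in the coredns log, exactly as above.
    kubectl run dnstest --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup storage.googleapis.com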
	
	
	==> describe nodes <==
	Name:               addons-610291
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-610291
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=addons-610291
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T07_48_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-610291
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-610291"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 07:47:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-610291
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 07:50:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 07:50:02 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 07:50:02 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 07:50:02 +0000   Sun, 26 Oct 2025 07:47:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 07:50:02 +0000   Sun, 26 Oct 2025 07:48:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-610291
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                4788153a-655f-4b2c-a534-38625b1e2dd6
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-86bd5cbb97-h4cbz      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  gadget                      gadget-qvptl                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  gcp-auth                    gcp-auth-78565c9fb4-n2jng                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         115s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-s4j4n    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         2m2s
	  kube-system                 amd-gpu-device-plugin-79j4j                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 coredns-66bc5c9577-dqbbr                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m3s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 csi-hostpathplugin-nnl9n                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 etcd-addons-610291                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m8s
	  kube-system                 kindnet-b4jwg                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-addons-610291                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-controller-manager-addons-610291        200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-proxy-mxqr8                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-addons-610291                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 metrics-server-85b7d694d7-fs7sf              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         2m2s
	  kube-system                 nvidia-device-plugin-daemonset-9g5j7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 registry-6b586f9694-9xvr4                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-creds-764b6fb674-4mf5m              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 registry-proxy-xgtqv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         81s
	  kube-system                 snapshot-controller-7d9fbc56b8-klrbn         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 snapshot-controller-7d9fbc56b8-qx7lp         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  local-path-storage          local-path-provisioner-648f6765c9-kr5wg      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m2s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-9mp2x               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     2m2s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 2m1s  kube-proxy       
	  Normal  Starting                 2m9s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m8s  kubelet          Node addons-610291 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m8s  kubelet          Node addons-610291 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m8s  kubelet          Node addons-610291 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m4s  node-controller  Node addons-610291 event: Registered Node addons-610291 in Controller
	  Normal  NodeReady                81s   kubelet          Node addons-610291 status is now: NodeReady
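The event ages agree with the Conditions block: NodeReady fired 81s before this snapshot, matching the 07:48:47 Ready transition, and the 81s-old DaemonSet pods in the table above were scheduled at that moment. The section is the standard node description and can be regenerated on demand:

    # Same view on demand; addons-610291 is the profile/node name used
    # throughout this report.
    kubectl describe node addons-610291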
	
	
	==> dmesg <==
	[Oct26 07:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001869] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.000999] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.085013] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.395692] i8042: Warning: Keylock active
	[  +0.011460] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.495516] block sda: the capability attribute has been deprecated.
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce] <==
	{"level":"warn","ts":"2025-10-26T07:47:57.228608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.234507Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.241182Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.247018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.252954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59572","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.259650Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.266539Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.273116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.279298Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59654","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.285854Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59672","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.292421Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.311411Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.319052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.326022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:47:57.379093Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:07.582990Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:07.594310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.782616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60380","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.799216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.811183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:48:34.817366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:49:06.143183Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.971053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-10-26T07:49:06.143329Z","caller":"traceutil/trace.go:172","msg":"trace[160218583] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1083; }","duration":"115.131212ms","start":"2025-10-26T07:49:06.028180Z","end":"2025-10-26T07:49:06.143312Z","steps":["trace[160218583] 'range keys from in-memory index tree'  (duration: 114.906398ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T07:49:17.198809Z","caller":"traceutil/trace.go:172","msg":"trace[881903939] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"114.711873ms","start":"2025-10-26T07:49:17.084074Z","end":"2025-10-26T07:49:17.198786Z","steps":["trace[881903939] 'process raft request'  (duration: 70.579972ms)","trace[881903939] 'compare'  (duration: 43.982834ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T07:49:32.452853Z","caller":"traceutil/trace.go:172","msg":"trace[2012589528] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"134.42988ms","start":"2025-10-26T07:49:32.318409Z","end":"2025-10-26T07:49:32.452839Z","steps":["trace[2012589528] 'process raft request'  (duration: 134.341572ms)"],"step_count":1}
	
	
	==> gcp-auth [9abeb3423d9e6c097de96ad32dc682ea966fb5047da015c6c3fbfa7e44fd8c46] <==
	2025/10/26 07:49:24 GCP Auth Webhook started!
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	2025/10/26 07:49:58 Ready to marshal response ...
	2025/10/26 07:49:58 Ready to write response ...
	
	
	==> kernel <==
	 07:50:08 up 32 min,  0 user,  load average: 1.45, 1.22, 0.52
	Linux addons-610291 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c] <==
	E1026 07:48:36.749103       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1026 07:48:36.749123       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1026 07:48:36.749103       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1026 07:48:36.749186       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1026 07:48:38.349140       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 07:48:38.349167       1 metrics.go:72] Registering metrics
	I1026 07:48:38.349235       1 controller.go:711] "Syncing nftables rules"
	I1026 07:48:46.755334       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:48:46.755392       1 main.go:301] handling current node
	I1026 07:48:56.748311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:48:56.748348       1 main.go:301] handling current node
	I1026 07:49:06.748293       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:06.748606       1 main.go:301] handling current node
	I1026 07:49:16.749004       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:16.749048       1 main.go:301] handling current node
	I1026 07:49:26.748831       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:26.748865       1 main.go:301] handling current node
	I1026 07:49:36.748933       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:36.748964       1 main.go:301] handling current node
	I1026 07:49:46.751311       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:46.751339       1 main.go:301] handling current node
	I1026 07:49:56.749032       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:49:56.749058       1 main.go:301] handling current node
	I1026 07:50:06.751368       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:50:06.751410       1 main.go:301] handling current node
	
	
	==> kube-apiserver [2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a] <==
	W1026 07:48:54.260887       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 07:48:54.260882       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.260956       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1026 07:48:54.261268       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.267134       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.288414       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.329957       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.410890       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.572190       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	E1026 07:48:54.892696       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.18.53:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.18.53:443: connect: connection refused" logger="UnhandledError"
	W1026 07:48:55.263142       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 07:48:55.263194       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1026 07:48:55.263206       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1026 07:48:55.263147       1 handler_proxy.go:99] no RequestInfo found in the context
	E1026 07:48:55.263322       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1026 07:48:55.264475       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1026 07:48:55.568456       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1026 07:50:06.806295       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55486: use of closed network connection
	E1026 07:50:06.954201       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:55512: use of closed network connection
	
	
	==> kube-controller-manager [a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83] <==
	I1026 07:48:04.760424       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 07:48:04.760448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 07:48:04.760453       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 07:48:04.760479       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 07:48:04.760541       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 07:48:04.761971       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 07:48:04.762050       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 07:48:04.762731       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 07:48:04.762795       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 07:48:04.762848       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 07:48:04.762855       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 07:48:04.762863       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 07:48:04.768901       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="addons-610291" podCIDRs=["10.244.0.0/24"]
	I1026 07:48:04.769983       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:48:04.780540       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 07:48:04.781761       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 07:48:06.492228       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1026 07:48:34.774498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1026 07:48:34.774637       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1026 07:48:34.774672       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1026 07:48:34.794041       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1026 07:48:34.798726       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1026 07:48:34.875021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:48:34.899728       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 07:48:49.715463       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5] <==
	I1026 07:48:06.414479       1 server_linux.go:53] "Using iptables proxy"
	I1026 07:48:06.505064       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 07:48:06.606278       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 07:48:06.613829       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 07:48:06.613924       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 07:48:06.642660       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 07:48:06.642780       1 server_linux.go:132] "Using iptables Proxier"
	I1026 07:48:06.649323       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 07:48:06.654708       1 server.go:527] "Version info" version="v1.34.1"
	I1026 07:48:06.654738       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:48:06.656669       1 config.go:309] "Starting node config controller"
	I1026 07:48:06.656688       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 07:48:06.656702       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 07:48:06.656764       1 config.go:200] "Starting service config controller"
	I1026 07:48:06.656783       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 07:48:06.656801       1 config.go:106] "Starting endpoint slice config controller"
	I1026 07:48:06.656806       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 07:48:06.656820       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 07:48:06.656825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 07:48:06.756884       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 07:48:06.756884       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 07:48:06.756999       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3] <==
	E1026 07:47:57.773317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 07:47:57.773415       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 07:47:57.774003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 07:47:57.774089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 07:47:57.773992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 07:47:57.774309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:47:57.774315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:47:57.774362       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:47:57.774363       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:47:57.774377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:47:57.774377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 07:47:57.774512       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 07:47:57.774816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:47:57.774838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 07:47:58.631114       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:47:58.631117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:47:58.678854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:47:58.717442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 07:47:58.762486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:47:58.771056       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 07:47:58.971665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 07:47:58.975770       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:47:58.999838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 07:47:59.008693       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1026 07:48:01.469527       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 07:49:16 addons-610291 kubelet[1304]: I1026 07:49:16.266977    1304 scope.go:117] "RemoveContainer" containerID="5a8a38ba7c4543907893684f9e4b3d4755ec127087dce6391b3705e52e0761b4"
	Oct 26 07:49:16 addons-610291 kubelet[1304]: I1026 07:49:16.269876    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xgtqv" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:49:17 addons-610291 kubelet[1304]: I1026 07:49:17.274286    1304 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-xgtqv" secret="" err="secret \"gcp-auth\" not found"
	Oct 26 07:49:17 addons-610291 kubelet[1304]: I1026 07:49:17.324756    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-xgtqv" podStartSLOduration=2.4435620719999998 podStartE2EDuration="30.324732802s" podCreationTimestamp="2025-10-26 07:48:47 +0000 UTC" firstStartedPulling="2025-10-26 07:48:47.623339709 +0000 UTC m=+47.707687430" lastFinishedPulling="2025-10-26 07:49:15.504510446 +0000 UTC m=+75.588858160" observedRunningTime="2025-10-26 07:49:16.296516935 +0000 UTC m=+76.380864659" watchObservedRunningTime="2025-10-26 07:49:17.324732802 +0000 UTC m=+77.409080526"
	Oct 26 07:49:17 addons-610291 kubelet[1304]: I1026 07:49:17.607124    1304 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckj2w\" (UniqueName: \"kubernetes.io/projected/4e3d532f-42a7-4442-a0b5-62fff185e699-kube-api-access-ckj2w\") pod \"4e3d532f-42a7-4442-a0b5-62fff185e699\" (UID: \"4e3d532f-42a7-4442-a0b5-62fff185e699\") "
	Oct 26 07:49:17 addons-610291 kubelet[1304]: I1026 07:49:17.609607    1304 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e3d532f-42a7-4442-a0b5-62fff185e699-kube-api-access-ckj2w" (OuterVolumeSpecName: "kube-api-access-ckj2w") pod "4e3d532f-42a7-4442-a0b5-62fff185e699" (UID: "4e3d532f-42a7-4442-a0b5-62fff185e699"). InnerVolumeSpecName "kube-api-access-ckj2w". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Oct 26 07:49:17 addons-610291 kubelet[1304]: I1026 07:49:17.708304    1304 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ckj2w\" (UniqueName: \"kubernetes.io/projected/4e3d532f-42a7-4442-a0b5-62fff185e699-kube-api-access-ckj2w\") on node \"addons-610291\" DevicePath \"\""
	Oct 26 07:49:18 addons-610291 kubelet[1304]: I1026 07:49:18.279629    1304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1a3dd16e0be612a7c652b472ccd82016dc96f1a08a6372ae2d04c6d7da2fff"
	Oct 26 07:49:19 addons-610291 kubelet[1304]: E1026 07:49:19.016761    1304 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 26 07:49:19 addons-610291 kubelet[1304]: E1026 07:49:19.016876    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f373a48-52c9-441e-a2db-28351bc83a48-gcr-creds podName:5f373a48-52c9-441e-a2db-28351bc83a48 nodeName:}" failed. No retries permitted until 2025-10-26 07:49:51.016854213 +0000 UTC m=+111.101201937 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5f373a48-52c9-441e-a2db-28351bc83a48-gcr-creds") pod "registry-creds-764b6fb674-4mf5m" (UID: "5f373a48-52c9-441e-a2db-28351bc83a48") : secret "registry-creds-gcr" not found
	Oct 26 07:49:22 addons-610291 kubelet[1304]: I1026 07:49:22.311967    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-qvptl" podStartSLOduration=66.379243472 podStartE2EDuration="1m16.311947273s" podCreationTimestamp="2025-10-26 07:48:06 +0000 UTC" firstStartedPulling="2025-10-26 07:49:11.758059312 +0000 UTC m=+71.842407015" lastFinishedPulling="2025-10-26 07:49:21.690763089 +0000 UTC m=+81.775110816" observedRunningTime="2025-10-26 07:49:22.311524476 +0000 UTC m=+82.395872200" watchObservedRunningTime="2025-10-26 07:49:22.311947273 +0000 UTC m=+82.396294997"
	Oct 26 07:49:22 addons-610291 kubelet[1304]: I1026 07:49:22.312293    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-s4j4n" podStartSLOduration=60.037736481 podStartE2EDuration="1m16.312279849s" podCreationTimestamp="2025-10-26 07:48:06 +0000 UTC" firstStartedPulling="2025-10-26 07:49:03.262107322 +0000 UTC m=+63.346455029" lastFinishedPulling="2025-10-26 07:49:19.536650688 +0000 UTC m=+79.620998397" observedRunningTime="2025-10-26 07:49:20.303171244 +0000 UTC m=+80.387518972" watchObservedRunningTime="2025-10-26 07:49:22.312279849 +0000 UTC m=+82.396627573"
	Oct 26 07:49:24 addons-610291 kubelet[1304]: I1026 07:49:24.317394    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-n2jng" podStartSLOduration=66.644019186 podStartE2EDuration="1m11.317375764s" podCreationTimestamp="2025-10-26 07:48:13 +0000 UTC" firstStartedPulling="2025-10-26 07:49:19.491379584 +0000 UTC m=+79.575727293" lastFinishedPulling="2025-10-26 07:49:24.164736168 +0000 UTC m=+84.249083871" observedRunningTime="2025-10-26 07:49:24.317043365 +0000 UTC m=+84.401391089" watchObservedRunningTime="2025-10-26 07:49:24.317375764 +0000 UTC m=+84.401723492"
	Oct 26 07:49:26 addons-610291 kubelet[1304]: I1026 07:49:26.048996    1304 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Oct 26 07:49:26 addons-610291 kubelet[1304]: I1026 07:49:26.049038    1304 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Oct 26 07:49:28 addons-610291 kubelet[1304]: I1026 07:49:28.351032    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-nnl9n" podStartSLOduration=1.258537947 podStartE2EDuration="41.35100798s" podCreationTimestamp="2025-10-26 07:48:47 +0000 UTC" firstStartedPulling="2025-10-26 07:48:47.600703877 +0000 UTC m=+47.685051580" lastFinishedPulling="2025-10-26 07:49:27.693173897 +0000 UTC m=+87.777521613" observedRunningTime="2025-10-26 07:49:28.349186416 +0000 UTC m=+88.433534139" watchObservedRunningTime="2025-10-26 07:49:28.35100798 +0000 UTC m=+88.435355706"
	Oct 26 07:49:37 addons-610291 kubelet[1304]: I1026 07:49:37.999312    1304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="722d01b9-482f-4d60-ba91-6286b5680e46" path="/var/lib/kubelet/pods/722d01b9-482f-4d60-ba91-6286b5680e46/volumes"
	Oct 26 07:49:49 addons-610291 kubelet[1304]: I1026 07:49:49.999501    1304 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e3d532f-42a7-4442-a0b5-62fff185e699" path="/var/lib/kubelet/pods/4e3d532f-42a7-4442-a0b5-62fff185e699/volumes"
	Oct 26 07:49:51 addons-610291 kubelet[1304]: E1026 07:49:51.055407    1304 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Oct 26 07:49:51 addons-610291 kubelet[1304]: E1026 07:49:51.055494    1304 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5f373a48-52c9-441e-a2db-28351bc83a48-gcr-creds podName:5f373a48-52c9-441e-a2db-28351bc83a48 nodeName:}" failed. No retries permitted until 2025-10-26 07:50:55.055475543 +0000 UTC m=+175.139823264 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/5f373a48-52c9-441e-a2db-28351bc83a48-gcr-creds") pod "registry-creds-764b6fb674-4mf5m" (UID: "5f373a48-52c9-441e-a2db-28351bc83a48") : secret "registry-creds-gcr" not found
	Oct 26 07:49:58 addons-610291 kubelet[1304]: I1026 07:49:58.709537    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rsgg8\" (UniqueName: \"kubernetes.io/projected/1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc-kube-api-access-rsgg8\") pod \"busybox\" (UID: \"1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc\") " pod="default/busybox"
	Oct 26 07:49:58 addons-610291 kubelet[1304]: I1026 07:49:58.709600    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc-gcp-creds\") pod \"busybox\" (UID: \"1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc\") " pod="default/busybox"
	Oct 26 07:49:59 addons-610291 kubelet[1304]: I1026 07:49:59.985768    1304 scope.go:117] "RemoveContainer" containerID="7b81990fa2f8c24b3f221d4b516493b726453ffe6f1ed4012acef9d7940268a7"
	Oct 26 07:49:59 addons-610291 kubelet[1304]: I1026 07:49:59.993201    1304 scope.go:117] "RemoveContainer" containerID="a5eac72e20285b4c5c9fc0c420df67b0222b8a47a64b12a0e473aeea056d62b5"
	Oct 26 07:50:00 addons-610291 kubelet[1304]: I1026 07:50:00.466167    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.20734527 podStartE2EDuration="2.466147656s" podCreationTimestamp="2025-10-26 07:49:58 +0000 UTC" firstStartedPulling="2025-10-26 07:49:58.987996208 +0000 UTC m=+119.072343910" lastFinishedPulling="2025-10-26 07:50:00.246798583 +0000 UTC m=+120.331146296" observedRunningTime="2025-10-26 07:50:00.464369642 +0000 UTC m=+120.548717366" watchObservedRunningTime="2025-10-26 07:50:00.466147656 +0000 UTC m=+120.550495380"
	
	
	==> storage-provisioner [3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464] <==
	W1026 07:49:44.164345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:46.167112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:46.171863       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:48.175188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:48.178648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:50.181328       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:50.186128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:52.189546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:52.193166       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:54.196101       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:54.199815       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:56.202448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:56.206104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:58.209353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:49:58.212985       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:00.215745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:00.219066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:02.221892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:02.226357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:04.228752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:04.233279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:06.236014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:06.240383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:08.243745       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:50:08.248851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-610291 -n addons-610291
helpers_test.go:269: (dbg) Run:  kubectl --context addons-610291 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s registry-creds-764b6fb674-4mf5m
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s registry-creds-764b6fb674-4mf5m
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s registry-creds-764b6fb674-4mf5m: exit status 1 (58.340757ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-pdppz" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-d6z8s" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4mf5m" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-610291 describe pod ingress-nginx-admission-create-pdppz ingress-nginx-admission-patch-d6z8s registry-creds-764b6fb674-4mf5m: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable headlamp --alsologtostderr -v=1: exit status 11 (241.237295ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:09.559791   23705 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:09.560093   23705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:09.560104   23705 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:09.560108   23705 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:09.560306   23705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:09.560564   23705 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:09.560859   23705 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:09.560871   23705 addons.go:606] checking whether the cluster is paused
	I1026 07:50:09.560946   23705 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:09.560957   23705 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:09.561322   23705 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:09.579131   23705 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:09.579204   23705 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:09.596412   23705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:09.695023   23705 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:09.695100   23705 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:09.723881   23705 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:09.723918   23705 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:09.723924   23705 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:09.723928   23705 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:09.723933   23705 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:09.723937   23705 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:09.723941   23705 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:09.723945   23705 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:09.723949   23705 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:09.723963   23705 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:09.723967   23705 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:09.723972   23705 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:09.723979   23705 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:09.723984   23705 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:09.723992   23705 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:09.724008   23705 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:09.724018   23705 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:09.724024   23705 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:09.724028   23705 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:09.724040   23705 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:09.724044   23705 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:09.724048   23705 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:09.724052   23705 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:09.724056   23705 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:09.724060   23705 cri.go:89] found id: ""
	I1026 07:50:09.724116   23705 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:09.738074   23705 out.go:203] 
	W1026 07:50:09.739119   23705 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:09.739138   23705 out.go:285] * 
	* 
	W1026 07:50:09.742336   23705 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:09.743542   23705 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.54s)
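
Note on the failure mode: the Headlamp disable (and the CloudSpanner disable below) aborts with MK_ADDON_DISABLE_PAUSED because minikube's paused check runs `sudo runc list -f json` on the node, and on this crio node /run/runc does not exist, so the probe itself exits 1 before any addon state is inspected. The Go sketch below reproduces just that probe under stated assumptions: it reaches the node via `docker exec` for brevity (the real check goes through minikube's SSH runner, per the ssh_runner.go lines above), and the profile name addons-610291 is taken from this report.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Mirror the paused-state probe from the stderr above:
		// `sudo runc list -f json` inside the node container.
		// On a crio-runtime node there is no runc state directory, so
		// runc exits 1 with "open /run/runc: no such file or directory".
		cmd := exec.Command("docker", "exec", "addons-610291",
			"sudo", "runc", "list", "-f", "json")
		out, err := cmd.CombinedOutput()
		fmt.Printf("output: %s\n", out)
		if err != nil {
			// minikube surfaces this non-zero exit as
			// MK_ADDON_DISABLE_PAUSED (see the stderr block above).
			fmt.Printf("paused check failed: %v\n", err)
		}
	}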

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-86bd5cbb97-h4cbz" [878dc796-cd3d-4402-bbe5-b6eeb7da2e94] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003081852s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (256.082627ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:25.383176   25619 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:25.383487   25619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:25.383499   25619 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:25.383503   25619 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:25.383749   25619 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:25.384111   25619 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:25.384575   25619 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:25.384594   25619 addons.go:606] checking whether the cluster is paused
	I1026 07:50:25.384715   25619 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:25.384738   25619 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:25.385149   25619 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:25.406725   25619 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:25.406777   25619 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:25.424877   25619 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:25.524572   25619 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:25.524643   25619 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:25.553864   25619 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:25.553884   25619 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:25.553889   25619 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:25.553893   25619 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:25.553896   25619 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:25.553901   25619 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:25.553904   25619 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:25.553908   25619 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:25.553912   25619 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:25.553937   25619 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:25.553947   25619 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:25.553951   25619 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:25.553955   25619 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:25.553959   25619 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:25.553963   25619 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:25.553973   25619 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:25.553978   25619 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:25.553983   25619 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:25.553986   25619 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:25.553988   25619 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:25.553991   25619 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:25.553993   25619 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:25.553996   25619 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:25.553999   25619 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:25.554001   25619 cri.go:89] found id: ""
	I1026 07:50:25.554044   25619 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:25.569689   25619 out.go:203] 
	W1026 07:50:25.570865   25619 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:25.570889   25619 out.go:285] * 
	* 
	W1026 07:50:25.574091   25619 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:25.575355   25619 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
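Note: every addons-disable failure in this run exits the same way: minikube's paused-state check shells out to "sudo runc list -f json", which fails because /run/runc does not exist on the node. The likely cause (an assumption; the log only shows the missing directory) is that this crio configuration uses crun rather than runc as its OCI runtime, so runc never created a state directory. A minimal sketch for confirming that on the node, reusing the profile name from this run:

	# The CRI-level listing succeeds regardless of the OCI runtime
	# (this is the same crictl command the failed check above already ran):
	minikube -p addons-610291 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# Check which runtime state directories actually exist; /run/crun showing up
	# here is an assumption that holds only if crun is in fact the runtime in use:
	minikube -p addons-610291 ssh -- ls /run/runc /run/crun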

TestAddons/parallel/LocalPath (10.15s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-610291 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-610291 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [1b86e3ca-0fe9-4db8-945a-1e62c284a534] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [1b86e3ca-0fe9-4db8-945a-1e62c284a534] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [1b86e3ca-0fe9-4db8-945a-1e62c284a534] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.003472161s
addons_test.go:967: (dbg) Run:  kubectl --context addons-610291 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 ssh "cat /opt/local-path-provisioner/pvc-8572f8bb-02cc-4c0a-8349-02180884ca24_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-610291 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-610291 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (248.206933ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:28.740661   25948 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:28.740929   25948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:28.740938   25948 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:28.740942   25948 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:28.741264   25948 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:28.741527   25948 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:28.741841   25948 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:28.741853   25948 addons.go:606] checking whether the cluster is paused
	I1026 07:50:28.741928   25948 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:28.741943   25948 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:28.742324   25948 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:28.760528   25948 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:28.760599   25948 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:28.778363   25948 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:28.878184   25948 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:28.878297   25948 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:28.907169   25948 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:28.907195   25948 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:28.907201   25948 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:28.907205   25948 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:28.907210   25948 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:28.907216   25948 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:28.907220   25948 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:28.907224   25948 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:28.907228   25948 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:28.907236   25948 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:28.907241   25948 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:28.907245   25948 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:28.907262   25948 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:28.907266   25948 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:28.907270   25948 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:28.907282   25948 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:28.907291   25948 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:28.907297   25948 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:28.907300   25948 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:28.907303   25948 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:28.907306   25948 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:28.907308   25948 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:28.907310   25948 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:28.907313   25948 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:28.907315   25948 cri.go:89] found id: ""
	I1026 07:50:28.907356   25948 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:28.921626   25948 out.go:203] 
	W1026 07:50:28.923073   25948 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:28Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:28.923099   25948 out.go:285] * 
	* 
	W1026 07:50:28.926102   25948 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:28.927455   25948 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (10.15s)

TestAddons/parallel/NvidiaDevicePlugin (5.32s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-9g5j7" [4b83bdea-b49d-4190-94d1-648aa449cddf] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003829826s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (312.787678ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:20.076465   24339 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:20.076692   24339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:20.076705   24339 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:20.076711   24339 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:20.077057   24339 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:20.077441   24339 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:20.077958   24339 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:20.077980   24339 addons.go:606] checking whether the cluster is paused
	I1026 07:50:20.078124   24339 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:20.078146   24339 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:20.078756   24339 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:20.103216   24339 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:20.103332   24339 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:20.128084   24339 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:20.244193   24339 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:20.244327   24339 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:20.284083   24339 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:20.284106   24339 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:20.284111   24339 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:20.284116   24339 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:20.284120   24339 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:20.284127   24339 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:20.284131   24339 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:20.284135   24339 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:20.284139   24339 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:20.284147   24339 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:20.284155   24339 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:20.284159   24339 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:20.284165   24339 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:20.284174   24339 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:20.284179   24339 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:20.284196   24339 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:20.284205   24339 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:20.284211   24339 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:20.284215   24339 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:20.284218   24339 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:20.284227   24339 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:20.284235   24339 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:20.284240   24339 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:20.284262   24339 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:20.284268   24339 cri.go:89] found id: ""
	I1026 07:50:20.284316   24339 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:20.302027   24339 out.go:203] 
	W1026 07:50:20.303462   24339 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:20.303485   24339 out.go:285] * 
	* 
	W1026 07:50:20.308227   24339 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:20.310315   24339 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.32s)

TestAddons/parallel/Yakd (6.25s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-9mp2x" [21a76b20-54d9-4a96-bb2f-be564f1d8c44] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003650006s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable yakd --alsologtostderr -v=1: exit status 11 (246.069121ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:18.594469   24130 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:18.594794   24130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:18.594808   24130 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:18.594814   24130 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:18.595117   24130 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:18.595512   24130 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:18.596042   24130 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:18.596069   24130 addons.go:606] checking whether the cluster is paused
	I1026 07:50:18.596206   24130 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:18.596229   24130 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:18.596781   24130 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:18.614957   24130 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:18.615012   24130 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:18.632510   24130 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:18.731732   24130 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:18.731811   24130 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:18.761281   24130 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:18.761300   24130 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:18.761304   24130 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:18.761307   24130 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:18.761310   24130 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:18.761313   24130 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:18.761316   24130 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:18.761318   24130 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:18.761321   24130 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:18.761330   24130 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:18.761333   24130 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:18.761335   24130 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:18.761338   24130 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:18.761341   24130 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:18.761344   24130 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:18.761348   24130 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:18.761351   24130 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:18.761354   24130 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:18.761357   24130 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:18.761359   24130 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:18.761362   24130 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:18.761364   24130 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:18.761367   24130 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:18.761374   24130 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:18.761376   24130 cri.go:89] found id: ""
	I1026 07:50:18.761411   24130 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:18.775786   24130 out.go:203] 
	W1026 07:50:18.777063   24130 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:18Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:18Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:18.777100   24130 out.go:285] * 
	* 
	W1026 07:50:18.780163   24130 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:18.781660   24130 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.25s)

TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-79j4j" [3de0f744-f685-4002-a0fb-987b69a28eed] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.002683282s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-610291 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-610291 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (245.540317ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1026 07:50:14.805330   23932 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:50:14.805669   23932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:14.805680   23932 out.go:374] Setting ErrFile to fd 2...
	I1026 07:50:14.805686   23932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:50:14.805864   23932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:50:14.806133   23932 mustload.go:65] Loading cluster: addons-610291
	I1026 07:50:14.806507   23932 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:14.806527   23932 addons.go:606] checking whether the cluster is paused
	I1026 07:50:14.806629   23932 config.go:182] Loaded profile config "addons-610291": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:50:14.806644   23932 host.go:66] Checking if "addons-610291" exists ...
	I1026 07:50:14.807156   23932 cli_runner.go:164] Run: docker container inspect addons-610291 --format={{.State.Status}}
	I1026 07:50:14.825666   23932 ssh_runner.go:195] Run: systemctl --version
	I1026 07:50:14.825743   23932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-610291
	I1026 07:50:14.843608   23932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/addons-610291/id_rsa Username:docker}
	I1026 07:50:14.943123   23932 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 07:50:14.943198   23932 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 07:50:14.972592   23932 cri.go:89] found id: "ff1e9c088f2c4cc9d8e3e5a30b43760b7ce45ad637a885f17af0d6476000aca5"
	I1026 07:50:14.972613   23932 cri.go:89] found id: "ae9bf07bc6fbd78664c8272a24d2f5ad7cbbcde69a042eb5f2ee4095bc0769bd"
	I1026 07:50:14.972617   23932 cri.go:89] found id: "790b75b33837b68a8925258b868dbfe3cd721f36196f7accb6e1e51d9661a81d"
	I1026 07:50:14.972620   23932 cri.go:89] found id: "6fbcc38580336dc58e7967ce2160d026b5fbb7fbff93898c9885ef009cee0767"
	I1026 07:50:14.972623   23932 cri.go:89] found id: "b52991ad016ebe425c14d28b6570992e98b0a4280e03a9db5de093f03e196a05"
	I1026 07:50:14.972628   23932 cri.go:89] found id: "d22cb2ff56e595813ca61e06c5c174e7250f3ef107e48bedde479cdd4c2260eb"
	I1026 07:50:14.972631   23932 cri.go:89] found id: "9787a3b2f3c6f52d31d9a26b4a18388e740d7c9220154d90c6920026356680e4"
	I1026 07:50:14.972634   23932 cri.go:89] found id: "48d279f0283b0f72c6d21fab7543d6eb626997511a639125bc9da635a1bab727"
	I1026 07:50:14.972636   23932 cri.go:89] found id: "4bec0c61b08badc889a72b40bb6794588f0ce7145df95c1f5f93348bbe5272bd"
	I1026 07:50:14.972642   23932 cri.go:89] found id: "a5104f945769b199ab8346b30b503a3417ba6cf910b2b0ce8775cad5df6c3578"
	I1026 07:50:14.972644   23932 cri.go:89] found id: "d0946ef457f293127e9a32204ecfe05d090a23e3e561d72169e5f6344a9a4545"
	I1026 07:50:14.972647   23932 cri.go:89] found id: "d78d517513eb7bf20ca1ee58af994dc958ada65ceac161f88772a4d8366245b4"
	I1026 07:50:14.972649   23932 cri.go:89] found id: "9606d1f8109dbf0b374bda01cc50f59753086dacc0b41139b641e428e568e230"
	I1026 07:50:14.972652   23932 cri.go:89] found id: "d0e6d85b2ec865a4ca6c9566a76bae0cd41653adcbc59d088d5e679b245147f8"
	I1026 07:50:14.972655   23932 cri.go:89] found id: "604c3b50083eab458c4c4467c6d608a282f99a8641ec1c0b85863ea9df1e48de"
	I1026 07:50:14.972666   23932 cri.go:89] found id: "2749efc7ce147f8a194950927b330b2d51cf22fd0c98564370a51d23e1c2e59f"
	I1026 07:50:14.972674   23932 cri.go:89] found id: "0d587f45003f45a1934ab37cbc0a4b671088a275320eeba48f46b4926029ffe5"
	I1026 07:50:14.972681   23932 cri.go:89] found id: "3504b65df25d511c2089434970c0f2f3bff63a9b65d31c823dcca738fd7af464"
	I1026 07:50:14.972690   23932 cri.go:89] found id: "e6e97259a969ccfffadc63a52be37ada69bfa4d151647c48dd8db95a603cd3c5"
	I1026 07:50:14.972696   23932 cri.go:89] found id: "4c0deee84eddbda3bf5fb7c81ef684154db5904881f1df092e26cbae9c23b99c"
	I1026 07:50:14.972698   23932 cri.go:89] found id: "aa644f5a3e4c491a05af01911dfbca65c2f9a7adf66486d638448d8f67ebfbce"
	I1026 07:50:14.972701   23932 cri.go:89] found id: "2190af960ec640f66dea545f622d0de357dc38a2e54d8720fdd9b5eef871121a"
	I1026 07:50:14.972704   23932 cri.go:89] found id: "f9726db7b5e9664780bcfca822cc520ade08871c09add50092e587a924c1c7c3"
	I1026 07:50:14.972709   23932 cri.go:89] found id: "a92d6c36860a88f4573f793a88499a8f3867c5cbf5b84dfa7694c74f128e8a83"
	I1026 07:50:14.972712   23932 cri.go:89] found id: ""
	I1026 07:50:14.972748   23932 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 07:50:14.986861   23932 out.go:203] 
	W1026 07:50:14.988041   23932 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:50:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 07:50:14.988062   23932 out.go:285] * 
	* 
	W1026 07:50:14.991001   23932 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 07:50:14.992358   23932 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-610291 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.25s)

TestFunctional/parallel/ServiceCmdConnect (602.88s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-852274 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-852274 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-n6snd" [1e0c535d-2e30-47fe-babc-d89929de25ad] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-852274 -n functional-852274
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-10-26 08:05:55.841164238 +0000 UTC m=+1126.248815867
functional_test.go:1645: (dbg) Run:  kubectl --context functional-852274 describe po hello-node-connect-7d85dfc575-n6snd -n default
functional_test.go:1645: (dbg) kubectl --context functional-852274 describe po hello-node-connect-7d85dfc575-n6snd -n default:
Name:             hello-node-connect-7d85dfc575-n6snd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-852274/192.168.49.2
Start Time:       Sun, 26 Oct 2025 07:55:55 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l76r9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-l76r9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n6snd to functional-852274
  Normal   Pulling    7m14s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m14s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m14s (x5 over 9m59s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m49s (x20 over 9m58s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m35s (x21 over 9m58s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-852274 logs hello-node-connect-7d85dfc575-n6snd -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-852274 logs hello-node-connect-7d85dfc575-n6snd -n default: exit status 1 (60.123962ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-n6snd" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-852274 logs hello-node-connect-7d85dfc575-n6snd -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
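Note: the pull failure above is CRI-O short-name enforcement rather than a registry outage: "kicbase/echo-server" is not a fully-qualified image reference, and with short-name-mode = "enforcing" an ambiguous unqualified-search match aborts the pull instead of guessing a registry. Two hedged probes/workarounds (the registries.conf path is the containers/image default, and the fully-qualified name assumes the image is published on Docker Hub):

	# Inspect the node's short-name policy:
	minikube -p functional-852274 ssh -- grep -n short-name /etc/containers/registries.conf
	# Re-point the deployment at a fully-qualified reference so enforcement cannot trigger:
	kubectl --context functional-852274 set image deployment/hello-node-connect echo-server=docker.io/kicbase/echo-server:latest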
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-852274 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-n6snd
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-852274/192.168.49.2
Start Time:       Sun, 26 Oct 2025 07:55:55 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
  IP:           10.244.0.10
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l76r9 (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-l76r9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n6snd to functional-852274
  Normal   Pulling    7m15s (x5 over 10m)     kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m15s (x5 over 10m)     kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m15s (x5 over 10m)     kubelet            Error: ErrImagePull
  Warning  Failed     4m50s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m36s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-852274 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-852274 logs -l app=hello-node-connect: exit status 1 (61.061432ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-n6snd" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-852274 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-852274 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.105.145.218
IPs:                      10.105.145.218
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30139/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
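Note: the empty Endpoints field above is a consequence of the ImagePullBackOff rather than a separate failure: a Service only gains endpoints from pods that pass readiness, and this pod never started its container. A quick confirmation (standard kubectl against this run's context):

	kubectl --context functional-852274 get endpoints hello-node-connect -o wide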
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-852274
helpers_test.go:243: (dbg) docker inspect functional-852274:

-- stdout --
	[
	    {
	        "Id": "d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe",
	        "Created": "2025-10-26T07:53:57.565935055Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 37180,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T07:53:57.600010832Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe/hostname",
	        "HostsPath": "/var/lib/docker/containers/d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe/hosts",
	        "LogPath": "/var/lib/docker/containers/d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe/d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe-json.log",
	        "Name": "/functional-852274",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-852274:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-852274",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d051df9b08f540dac03fc935b87d8d2ee1f3c59b65ea4ac3a83c53f4921de9fe",
	                "LowerDir": "/var/lib/docker/overlay2/240e1365bc7b6aaadb5c39a6e5926b06635d8f1699611f1cd1c00c5d7a6f230a-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/240e1365bc7b6aaadb5c39a6e5926b06635d8f1699611f1cd1c00c5d7a6f230a/merged",
	                "UpperDir": "/var/lib/docker/overlay2/240e1365bc7b6aaadb5c39a6e5926b06635d8f1699611f1cd1c00c5d7a6f230a/diff",
	                "WorkDir": "/var/lib/docker/overlay2/240e1365bc7b6aaadb5c39a6e5926b06635d8f1699611f1cd1c00c5d7a6f230a/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-852274",
	                "Source": "/var/lib/docker/volumes/functional-852274/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-852274",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-852274",
	                "name.minikube.sigs.k8s.io": "functional-852274",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6c31710296a08ac9035c2ce8f9cbc1921c7645ad7008b5ad1ddee2d835cef256",
	            "SandboxKey": "/var/run/docker/netns/6c31710296a0",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-852274": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:97:2c:83:3a:1e",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0b565247c1c2244a53fe6522298671e89082179c7fc034fadd651492d89f6e6c",
	                    "EndpointID": "076bd1f38fa938e397f9371a3707f3cc6dca01bb9ff658eaafc4f5fcfab9adf8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-852274",
	                        "d051df9b08f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
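The inspect output above lists every published guest port under NetworkSettings.Ports (for example 8441/tcp mapped to 127.0.0.1:32781). A hedged sketch for pulling one mapping out of that JSON, assuming the same container name:

	# Print the host port bound to the API server port 8441 inside the container.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-852274
	# Equivalent shortcut:
	docker port functional-852274 8441/tcp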
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-852274 -n functional-852274
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 logs -n 25: (1.277762503s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-852274 ssh findmnt -T /mount2                                                               │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ ssh            │ functional-852274 ssh findmnt -T /mount3                                                               │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ mount          │ -p functional-852274 --kill=true                                                                       │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │                     │
	│ ssh            │ functional-852274 ssh echo hello                                                                       │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ ssh            │ functional-852274 ssh cat /etc/hostname                                                                │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ tunnel         │ functional-852274 tunnel --alsologtostderr                                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │                     │
	│ tunnel         │ functional-852274 tunnel --alsologtostderr                                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │                     │
	│ tunnel         │ functional-852274 tunnel --alsologtostderr                                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │                     │
	│ addons         │ functional-852274 addons list                                                                          │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ addons         │ functional-852274 addons list -o json                                                                  │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:55 UTC │ 26 Oct 25 07:55 UTC │
	│ update-context │ functional-852274 update-context --alsologtostderr -v=2                                                │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ update-context │ functional-852274 update-context --alsologtostderr -v=2                                                │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ update-context │ functional-852274 update-context --alsologtostderr -v=2                                                │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ image          │ functional-852274 image ls --format short --alsologtostderr                                            │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ ssh            │ functional-852274 ssh pgrep buildkitd                                                                  │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │                     │
	│ image          │ functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ image          │ functional-852274 image ls                                                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ image          │ functional-852274 image ls --format yaml --alsologtostderr                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ image          │ functional-852274 image ls --format json --alsologtostderr                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ image          │ functional-852274 image ls --format table --alsologtostderr                                            │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 07:56 UTC │ 26 Oct 25 07:56 UTC │
	│ service        │ functional-852274 service list                                                                         │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 08:05 UTC │ 26 Oct 25 08:05 UTC │
	│ service        │ functional-852274 service list -o json                                                                 │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 08:05 UTC │ 26 Oct 25 08:05 UTC │
	│ service        │ functional-852274 service --namespace=default --https --url hello-node                                 │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 08:05 UTC │                     │
	│ service        │ functional-852274 service hello-node --url --format={{.IP}}                                            │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 08:05 UTC │                     │
	│ service        │ functional-852274 service hello-node --url                                                             │ functional-852274 │ jenkins │ v1.37.0 │ 26 Oct 25 08:05 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
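The last three service invocations in the audit table (08:05 UTC) have no END TIME, meaning they were still blocked waiting for a URL when these logs were collected, consistent with the empty service endpoints noted above. A sketch for reproducing the hang without blocking a harness, assuming coreutils timeout is available:

	# Bound the wait so the command fails fast instead of hanging indefinitely.
	timeout 30 out/minikube-linux-amd64 -p functional-852274 service hello-node --url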
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:55:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:55:45.987842   49256 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:55:45.987934   49256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.987940   49256 out.go:374] Setting ErrFile to fd 2...
	I1026 07:55:45.987947   49256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.988301   49256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:55:45.988808   49256 out.go:368] Setting JSON to false
	I1026 07:55:45.990012   49256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2297,"bootTime":1761463049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:55:45.990122   49256 start.go:141] virtualization: kvm guest
	I1026 07:55:45.992281   49256 out.go:179] * [functional-852274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:55:45.993894   49256 notify.go:220] Checking for updates...
	I1026 07:55:45.993921   49256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:55:45.995290   49256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:55:45.996615   49256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:55:45.998163   49256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:55:45.999492   49256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:55:46.000779   49256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:55:46.002415   49256 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:55:46.002944   49256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:55:46.034928   49256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:55:46.035049   49256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:55:46.103435   49256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-26 07:55:46.08889079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:55:46.103528   49256 docker.go:318] overlay module found
	I1026 07:55:46.105117   49256 out.go:179] * Using the docker driver based on the existing profile
	I1026 07:55:46.106567   49256 start.go:305] selected driver: docker
	I1026 07:55:46.106587   49256 start.go:925] validating driver "docker" against &{Name:functional-852274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-852274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:55:46.106694   49256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:55:46.108368   49256 out.go:203] 
	W1026 07:55:46.109668   49256 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I1026 07:55:46.110826   49256 out.go:203] 
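The "Last Start" log above ends with RSRC_INSUFFICIENT_REQ_MEMORY: this particular start attempt requested 250 MiB, below minikube's usable minimum of 1800 MB, so it exited before reaching the driver. A sketch of an invocation that clears that check (flag values illustrative):

	# Request memory above the 1800 MB floor for this profile.
	out/minikube-linux-amd64 start -p functional-852274 --driver=docker --container-runtime=crio --memory=2048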
	
	
	==> CRI-O <==
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.866271712Z" level=info msg="Started container" PID=8017 containerID=f24871d646d704d7c4487b12019ee2722ab3727e4c68b444c6e95a0516bd25b3 description=default/sp-pod/myfrontend id=1aa75f9f-0208-4b66-a0c5-3d7262b731a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e8ffe81e7fbfc692374975e8b71b0afd8e593ba257a6b454860779d9127a8cc5
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.874069527Z" level=info msg="Stopping pod sandbox: 9f59a1224207ea8cbf67c5cc75c755e2f0c69764635c7f29051dfcde765a3760" id=9551f218-f679-414a-af5f-9a2c81cd4fd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.874130477Z" level=info msg="Stopped pod sandbox (already stopped): 9f59a1224207ea8cbf67c5cc75c755e2f0c69764635c7f29051dfcde765a3760" id=9551f218-f679-414a-af5f-9a2c81cd4fd1 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.874606976Z" level=info msg="Removing pod sandbox: 9f59a1224207ea8cbf67c5cc75c755e2f0c69764635c7f29051dfcde765a3760" id=e87a38cc-2a53-4687-b633-d7438bdb8e2a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.876797284Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.876847994Z" level=info msg="Removed pod sandbox: 9f59a1224207ea8cbf67c5cc75c755e2f0c69764635c7f29051dfcde765a3760" id=e87a38cc-2a53-4687-b633-d7438bdb8e2a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.877245018Z" level=info msg="Stopping pod sandbox: 0d75d10e74775153184607e991b948bc40805a8588c03deed029d32c9a387eb7" id=d954b284-46bd-4ce2-ad28-ed190c154ce3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.877341332Z" level=info msg="Stopped pod sandbox (already stopped): 0d75d10e74775153184607e991b948bc40805a8588c03deed029d32c9a387eb7" id=d954b284-46bd-4ce2-ad28-ed190c154ce3 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.877619253Z" level=info msg="Removing pod sandbox: 0d75d10e74775153184607e991b948bc40805a8588c03deed029d32c9a387eb7" id=5bb51ad5-05ca-4e3b-8e88-508af24c6814 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.879949322Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.879997246Z" level=info msg="Removed pod sandbox: 0d75d10e74775153184607e991b948bc40805a8588c03deed029d32c9a387eb7" id=5bb51ad5-05ca-4e3b-8e88-508af24c6814 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.880334028Z" level=info msg="Stopping pod sandbox: 712f2fed3d5a248453dfe0b8f96a6f9fae77c7fa3fd04999d0793adb030e4cab" id=86e575fc-d24d-4a5b-8915-ce620789e604 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.880368281Z" level=info msg="Stopped pod sandbox (already stopped): 712f2fed3d5a248453dfe0b8f96a6f9fae77c7fa3fd04999d0793adb030e4cab" id=86e575fc-d24d-4a5b-8915-ce620789e604 name=/runtime.v1.RuntimeService/StopPodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.880623209Z" level=info msg="Removing pod sandbox: 712f2fed3d5a248453dfe0b8f96a6f9fae77c7fa3fd04999d0793adb030e4cab" id=cb856dae-06d0-4dfa-b040-6207185efad1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.882689522Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 07:56:10 functional-852274 crio[3570]: time="2025-10-26T07:56:10.882753802Z" level=info msg="Removed pod sandbox: 712f2fed3d5a248453dfe0b8f96a6f9fae77c7fa3fd04999d0793adb030e4cab" id=cb856dae-06d0-4dfa-b040-6207185efad1 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Oct 26 07:56:11 functional-852274 crio[3570]: time="2025-10-26T07:56:11.888702262Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=43e4e163-168e-4744-8882-a900b6fd953c name=/runtime.v1.ImageService/PullImage
	Oct 26 07:56:13 functional-852274 crio[3570]: time="2025-10-26T07:56:13.888117898Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=14400e6a-b06b-4575-9e2d-aad30bf007d8 name=/runtime.v1.ImageService/PullImage
	Oct 26 07:56:37 functional-852274 crio[3570]: time="2025-10-26T07:56:37.888194161Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=b33c56fd-6a29-4eb2-bee8-1b5e6a40aa16 name=/runtime.v1.ImageService/PullImage
	Oct 26 07:57:04 functional-852274 crio[3570]: time="2025-10-26T07:57:04.888852773Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c129e38b-4ed3-47af-a8e2-1dd00b5ed3da name=/runtime.v1.ImageService/PullImage
	Oct 26 07:57:18 functional-852274 crio[3570]: time="2025-10-26T07:57:18.888760684Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=45db756d-80d0-4383-bac5-9637129e0f6b name=/runtime.v1.ImageService/PullImage
	Oct 26 07:58:36 functional-852274 crio[3570]: time="2025-10-26T07:58:36.888412007Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=98e12119-0513-4526-accc-a3d6363945e4 name=/runtime.v1.ImageService/PullImage
	Oct 26 07:58:41 functional-852274 crio[3570]: time="2025-10-26T07:58:41.888055988Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=02d18c7a-fcd6-436b-99e4-2048351d2bed name=/runtime.v1.ImageService/PullImage
	Oct 26 08:01:30 functional-852274 crio[3570]: time="2025-10-26T08:01:30.888710026Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=1d50c86d-45d8-41e0-a8ff-2feb0c356198 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:01:34 functional-852274 crio[3570]: time="2025-10-26T08:01:34.888728117Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ef34c761-74fe-41ec-8590-0ac1f3d54a21 name=/runtime.v1.ImageService/PullImage
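The CRI-O log repeats "Pulling image: kicbase/echo-server:latest" from 07:56 through 08:01 with no corresponding completion message, so the pull appears to retry without ever finishing; pods that need this image stay unready, matching the empty service endpoints. A sketch for checking and side-loading the image, assuming it is present in the host's Docker daemon:

	# See whether the image ever landed in the node's CRI-O store.
	out/minikube-linux-amd64 -p functional-852274 ssh -- sudo crictl images | grep echo-server
	# Side-load it from the host instead of pulling over the network.
	out/minikube-linux-amd64 -p functional-852274 image load kicbase/echo-server:latest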
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f24871d646d70       docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8                  9 minutes ago       Running             myfrontend                  0                   e8ffe81e7fbfc       sp-pod                                       default
	dd94b550c7922       docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e                  10 minutes ago      Running             nginx                       0                   e4be43b2690e8       nginx-svc                                    default
	81fad31d6f6e1       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   e4589814b2c6f       dashboard-metrics-scraper-77bf4d6c4c-gdn9f   kubernetes-dashboard
	bb38c213e1a31       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         10 minutes ago      Running             kubernetes-dashboard        0                   adf25ae837dd5       kubernetes-dashboard-855c9754f9-7ggjs        kubernetes-dashboard
	7f84c8ec6064a       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              10 minutes ago      Exited              mount-munger                0                   10930b91b3a58       busybox-mount                                default
	f47435a123a93       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   be789e01f30f1       mysql-5bb876957f-xdb42                       default
	5b60bdb69f0ac       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   c0dd6fc476879       kube-apiserver-functional-852274             kube-system
	faafe3575ca53       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   f08e1a6d8771f       kube-controller-manager-functional-852274    kube-system
	8cc6e21325a48       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   7a96a9fa6c676       kube-scheduler-functional-852274             kube-system
	1e24c150b2f9e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   3164f8bca787a       etcd-functional-852274                       kube-system
	8b70c52fc21a7       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     1                   f08e1a6d8771f       kube-controller-manager-functional-852274    kube-system
	cb8f812053c36       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   f83b62dab991a       kube-proxy-s5mlz                             kube-system
	38baa04b89edd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   3eede4934cfa4       kindnet-6bgbm                                kube-system
	7fd07d0f0dd03       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   365c6f6ec0961       storage-provisioner                          kube-system
	3808583e78a4c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   3605a89127ea4       coredns-66bc5c9577-8vt45                     kube-system
	6d6c20939122c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   3605a89127ea4       coredns-66bc5c9577-8vt45                     kube-system
	6ff30d5c4be9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   365c6f6ec0961       storage-provisioner                          kube-system
	ecfcb151513bd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   3eede4934cfa4       kindnet-6bgbm                                kube-system
	c4cfed38a463e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   f83b62dab991a       kube-proxy-s5mlz                             kube-system
	94bde1150ab27       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   3164f8bca787a       etcd-functional-852274                       kube-system
	1f60aaff38436       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   7a96a9fa6c676       kube-scheduler-functional-852274             kube-system
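Neither hello-node deployment appears in the container list above: their pods never got past image pull, so CRI-O never created containers for them. A sketch for listing sandboxes whose containers never started, assuming SSH access to the node:

	# Pod sandboxes exist even when their containers were never created.
	out/minikube-linux-amd64 -p functional-852274 ssh -- sudo crictl pods
	out/minikube-linux-amd64 -p functional-852274 ssh -- sudo crictl ps -a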
	
	
	==> coredns [3808583e78a4cf0333da08ec3ea4abe8cfb9b37a5e12ffcc8988a81953f726a0] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33371 - 5578 "HINFO IN 6078813331373108768.8146692464262253471. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.033453496s
	
	
	==> coredns [6d6c20939122c8175150851468b3a84554e028d215c8789ae0ff069e8e78ea3b] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40102 - 49254 "HINFO IN 2101633484708544293.5384898301707806136. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029987607s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-852274
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-852274
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=functional-852274
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T07_54_11_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 07:54:08 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-852274
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:05:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:05:54 +0000   Sun, 26 Oct 2025 07:54:06 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:05:54 +0000   Sun, 26 Oct 2025 07:54:06 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:05:54 +0000   Sun, 26 Oct 2025 07:54:06 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:05:54 +0000   Sun, 26 Oct 2025 07:54:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-852274
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                8bfb4cf5-344a-449f-954e-5e52d13e6797
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-xsfbm                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-n6snd           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-xdb42                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m47s
	  kube-system                 coredns-66bc5c9577-8vt45                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-852274                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-6bgbm                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-852274              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-852274     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-s5mlz                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-852274              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-gdn9f    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7ggjs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-852274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-852274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-852274 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-852274 event: Registered Node functional-852274 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-852274 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x9 over 10m)  kubelet          Node functional-852274 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-852274 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-852274 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-852274 event: Registered Node functional-852274 in Controller
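The node itself is Ready with ample headroom (1450m of 8 CPUs and 732Mi of ~32Gi memory requested), so resource pressure can be ruled out as the cause of the unready hello-node pods. A sketch for surfacing their scheduling and pull events, commands illustrative:

	# Recent events for the stuck pods, newest last.
	kubectl get events -n default --sort-by=.lastTimestamp | grep hello-node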
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
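The repeated "martian source 10.244.0.20 from 127.0.0.1" entries are the kernel flagging hairpin traffic between loopback and the pod network; they are noisy but not necessarily the failure here. A hedged read-only check of the sysctls that govern this behavior, assuming shell access to the host:

	# log_martians enables these messages; rp_filter governs whether such packets are dropped.
	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter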
	
	
	==> etcd [1e24c150b2f9eb426b416048b27a86595c6eaa848e9ff7e60fce62151a707341] <==
	{"level":"warn","ts":"2025-10-26T07:55:12.609277Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.619321Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42444","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.625495Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42462","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.631267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.637015Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.642715Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.648885Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.655609Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.661494Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42562","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.667414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.673241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.679153Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.691511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.698355Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.704500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.710528Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.716680Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.722743Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.737953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.744069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.749909Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:55:12.798716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42806","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:05:12.346070Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1146}
	{"level":"info","ts":"2025-10-26T08:05:12.365906Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1146,"took":"19.490785ms","hash":3661731665,"current-db-size-bytes":3538944,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1585152,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-10-26T08:05:12.365958Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3661731665,"revision":1146,"compact-revision":-1}
	
	
	==> etcd [94bde1150ab27ab671efec7cca47759f5b3be26c888f847a482d019dea676d71] <==
	{"level":"warn","ts":"2025-10-26T07:54:07.620206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.628300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.634583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.652292Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.658659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.665884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T07:54:07.712068Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33820","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T07:54:51.800365Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-10-26T07:54:51.800439Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-852274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-10-26T07:54:51.800540Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T07:54:58.802007Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-10-26T07:54:58.803712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T07:54:58.803792Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-10-26T07:54:58.803909Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-10-26T07:54:58.803919Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-10-26T07:54:58.803998Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T07:54:58.804059Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T07:54:58.804071Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-10-26T07:54:58.804132Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-10-26T07:54:58.804149Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-10-26T07:54:58.804158Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T07:54:58.806609Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-10-26T07:54:58.806703Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-10-26T07:54:58.806741Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-10-26T07:54:58.806748Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-852274","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 08:05:57 up 48 min,  0 user,  load average: 0.03, 0.21, 0.37
	Linux functional-852274 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [38baa04b89eddfd6d1c04b174006e741cdbd32e124c61220280234968dfc12ea] <==
	I1026 08:03:52.223324       1 main.go:301] handling current node
	I1026 08:04:02.223674       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:02.223711       1 main.go:301] handling current node
	I1026 08:04:12.223447       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:12.223500       1 main.go:301] handling current node
	I1026 08:04:22.223657       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:22.223692       1 main.go:301] handling current node
	I1026 08:04:32.223968       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:32.224000       1 main.go:301] handling current node
	I1026 08:04:42.223747       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:42.223783       1 main.go:301] handling current node
	I1026 08:04:52.223437       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:04:52.223465       1 main.go:301] handling current node
	I1026 08:05:02.223876       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:02.223940       1 main.go:301] handling current node
	I1026 08:05:12.223545       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:12.223582       1 main.go:301] handling current node
	I1026 08:05:22.223744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:22.223776       1 main.go:301] handling current node
	I1026 08:05:32.223569       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:32.223599       1 main.go:301] handling current node
	I1026 08:05:42.223587       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:42.223615       1 main.go:301] handling current node
	I1026 08:05:52.224014       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 08:05:52.224044       1 main.go:301] handling current node
	
	
	==> kindnet [ecfcb151513bd26f80c3ecb1f74f3dc0ff9494317c3ef8d6e6c65b28934e47cf] <==
	I1026 07:54:16.780613       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 07:54:16.780882       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1026 07:54:16.781019       1 main.go:148] setting mtu 1500 for CNI 
	I1026 07:54:16.781034       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 07:54:16.781044       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T07:54:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 07:54:17.078575       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 07:54:17.078602       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 07:54:17.078614       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 07:54:17.078741       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 07:54:17.278701       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 07:54:17.278726       1 metrics.go:72] Registering metrics
	I1026 07:54:17.278785       1 controller.go:711] "Syncing nftables rules"
	I1026 07:54:27.079409       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:54:27.079482       1 main.go:301] handling current node
	I1026 07:54:37.085744       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:54:37.085777       1 main.go:301] handling current node
	I1026 07:54:47.079788       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1026 07:54:47.079827       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5b60bdb69f0ac5acaa7967f921a27f09c1f7ae31248d15faefda3ebe6dad86a1] <==
	I1026 07:55:13.969406       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 07:55:14.140676       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1026 07:55:14.346525       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1026 07:55:14.348084       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 07:55:14.352429       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 07:55:14.732306       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 07:55:14.822291       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 07:55:14.870600       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 07:55:14.875843       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 07:55:17.265859       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 07:55:33.154870       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.109.4.216"}
	I1026 07:55:37.178555       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.108.162.224"}
	I1026 07:55:38.426946       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.232.158"}
	I1026 07:55:46.961091       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 07:55:47.072509       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.149.179"}
	I1026 07:55:47.082207       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.248.83"}
	E1026 07:55:50.590568       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53342: use of closed network connection
	E1026 07:55:52.114873       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:53362: use of closed network connection
	E1026 07:55:53.832406       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35414: use of closed network connection
	I1026 07:55:55.088191       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.103.180.49"}
	E1026 07:55:55.236792       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35452: use of closed network connection
	I1026 07:55:55.514673       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.105.145.218"}
	E1026 07:56:09.038489       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:59872: use of closed network connection
	E1026 07:56:17.336386       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:42222: use of closed network connection
	I1026 08:05:13.179099       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [8b70c52fc21a7d85545334923c4e6584f53f495bb7a6a130509c9776f6cd501f] <==
	I1026 07:55:01.434674       1 shared_informer.go:349] "Waiting for caches to sync" controller="deployment"
	I1026 07:55:01.634418       1 controllermanager.go:781] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I1026 07:55:01.634459       1 horizontal.go:205] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I1026 07:55:01.634489       1 shared_informer.go:349] "Waiting for caches to sync" controller="HPA"
	I1026 07:55:01.684473       1 controllermanager.go:781] "Started controller" controller="cronjob-controller"
	I1026 07:55:01.684500       1 controllermanager.go:739] "Skipping a cloud provider controller" controller="cloud-node-lifecycle-controller"
	I1026 07:55:01.684618       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1026 07:55:01.684633       1 shared_informer.go:349] "Waiting for caches to sync" controller="cronjob"
	I1026 07:55:01.734463       1 controllermanager.go:781] "Started controller" controller="service-cidr-controller"
	I1026 07:55:01.734558       1 servicecidrs_controller.go:137] "Starting" logger="service-cidr-controller" controller="service-cidr-controller"
	I1026 07:55:01.734570       1 shared_informer.go:349] "Waiting for caches to sync" controller="service-cidr-controller"
	I1026 07:55:01.784373       1 controllermanager.go:781] "Started controller" controller="replicaset-controller"
	I1026 07:55:01.784493       1 replica_set.go:243] "Starting controller" logger="replicaset-controller" name="replicaset"
	I1026 07:55:01.784511       1 shared_informer.go:349] "Waiting for caches to sync" controller="ReplicaSet"
	I1026 07:55:01.835173       1 controllermanager.go:781] "Started controller" controller="endpointslice-controller"
	I1026 07:55:01.835459       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1026 07:55:01.835490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice"
	I1026 07:55:01.851793       1 shared_informer.go:356] "Caches are synced" controller="tokens"
	I1026 07:55:01.883907       1 controllermanager.go:781] "Started controller" controller="pod-garbage-collector-controller"
	I1026 07:55:01.883953       1 gc_controller.go:99] "Starting GC controller" logger="pod-garbage-collector-controller"
	I1026 07:55:01.883961       1 shared_informer.go:349] "Waiting for caches to sync" controller="GC"
	I1026 07:55:01.936704       1 controllermanager.go:781] "Started controller" controller="serviceaccount-controller"
	I1026 07:55:01.936768       1 serviceaccounts_controller.go:114] "Starting service account controller" logger="serviceaccount-controller"
	I1026 07:55:01.936780       1 shared_informer.go:349] "Waiting for caches to sync" controller="service account"
	F1026 07:55:01.981579       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/certificate-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [faafe3575ca53ae500bde828d2cd66107c507613188f25911f2dbfb9d7efb0ac] <==
	I1026 07:55:16.555682       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 07:55:16.556568       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 07:55:16.556639       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 07:55:16.557923       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 07:55:16.560433       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 07:55:16.560486       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 07:55:16.560498       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 07:55:16.560552       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 07:55:16.560567       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 07:55:16.560575       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 07:55:16.561591       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 07:55:16.562675       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 07:55:16.564968       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 07:55:16.567130       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 07:55:16.570406       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 07:55:16.571579       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1026 07:55:47.008216       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.022144       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.022395       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.027848       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.028272       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.035584       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.038644       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1026 07:55:47.041458       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1026 07:55:47.087771       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [c4cfed38a463e221bbfdcfaa00acf1b9da9457d7b4b248e9d7287c8ed01eebf0] <==
	I1026 07:54:16.673507       1 server_linux.go:53] "Using iptables proxy"
	I1026 07:54:16.742878       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 07:54:16.843606       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 07:54:16.843665       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 07:54:16.843762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 07:54:16.861843       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 07:54:16.861897       1 server_linux.go:132] "Using iptables Proxier"
	I1026 07:54:16.866977       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 07:54:16.867368       1 server.go:527] "Version info" version="v1.34.1"
	I1026 07:54:16.867385       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:54:16.868643       1 config.go:106] "Starting endpoint slice config controller"
	I1026 07:54:16.868669       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 07:54:16.868699       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 07:54:16.868705       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 07:54:16.868729       1 config.go:309] "Starting node config controller"
	I1026 07:54:16.868737       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 07:54:16.868738       1 config.go:200] "Starting service config controller"
	I1026 07:54:16.868760       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 07:54:16.968826       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 07:54:16.968858       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 07:54:16.968868       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 07:54:16.968960       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [cb8f812053c365dd1439c55ebf6dfe7d805e63c000e8b4895f974c3dc2f13cba] <==
	I1026 07:54:52.766739       1 server_linux.go:53] "Using iptables proxy"
	I1026 07:54:52.847052       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 07:54:52.947734       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 07:54:52.947780       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1026 07:54:52.947929       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 07:54:52.968039       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 07:54:52.968107       1 server_linux.go:132] "Using iptables Proxier"
	I1026 07:54:52.973900       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 07:54:52.974230       1 server.go:527] "Version info" version="v1.34.1"
	I1026 07:54:52.974297       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:54:52.976565       1 config.go:200] "Starting service config controller"
	I1026 07:54:52.976646       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 07:54:52.976672       1 config.go:106] "Starting endpoint slice config controller"
	I1026 07:54:52.976685       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 07:54:52.976746       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 07:54:52.976943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 07:54:52.976750       1 config.go:309] "Starting node config controller"
	I1026 07:54:52.976964       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 07:54:52.976971       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 07:54:53.077772       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 07:54:53.077805       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 07:54:53.077806       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [1f60aaff38436889a924f537dd293c22f4f5c6668c35a810a992983617ad283b] <==
	E1026 07:54:08.105234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 07:54:08.105295       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 07:54:08.920685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 07:54:08.957187       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 07:54:08.957991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 07:54:08.962452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 07:54:09.033168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 07:54:09.042210       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 07:54:09.048219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 07:54:09.092481       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 07:54:09.131119       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 07:54:09.231755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 07:54:09.238832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 07:54:09.256864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 07:54:09.260880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 07:54:09.309460       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 07:54:09.315567       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 07:54:09.342864       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I1026 07:54:11.001025       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 07:55:09.422090       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 07:55:09.422104       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1026 07:55:09.422193       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1026 07:55:09.422223       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1026 07:55:09.422232       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1026 07:55:09.422286       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [8cc6e21325a484e7889502c1364295a7482ef177654129e63c15c7c6a7d26c2c] <==
	I1026 07:55:11.763356       1 serving.go:386] Generated self-signed cert in-memory
	W1026 07:55:13.156947       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 07:55:13.156982       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 07:55:13.157003       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 07:55:13.157014       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 07:55:13.181873       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 07:55:13.181906       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 07:55:13.184111       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 07:55:13.184144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 07:55:13.184526       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 07:55:13.184559       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 07:55:13.284610       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:03:21 functional-852274 kubelet[4302]: E1026 08:03:21.887859    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:03:23 functional-852274 kubelet[4302]: E1026 08:03:23.887480    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:03:35 functional-852274 kubelet[4302]: E1026 08:03:35.887362    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:03:36 functional-852274 kubelet[4302]: E1026 08:03:36.888008    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:03:49 functional-852274 kubelet[4302]: E1026 08:03:49.888330    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:03:50 functional-852274 kubelet[4302]: E1026 08:03:50.888241    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:02 functional-852274 kubelet[4302]: E1026 08:04:02.888367    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:03 functional-852274 kubelet[4302]: E1026 08:04:03.887919    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:04:16 functional-852274 kubelet[4302]: E1026 08:04:16.887499    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:04:17 functional-852274 kubelet[4302]: E1026 08:04:17.887891    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:29 functional-852274 kubelet[4302]: E1026 08:04:29.888015    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:04:30 functional-852274 kubelet[4302]: E1026 08:04:30.888003    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:41 functional-852274 kubelet[4302]: E1026 08:04:41.888377    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:41 functional-852274 kubelet[4302]: E1026 08:04:41.888477    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:04:54 functional-852274 kubelet[4302]: E1026 08:04:54.887645    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:04:55 functional-852274 kubelet[4302]: E1026 08:04:55.888240    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:05:06 functional-852274 kubelet[4302]: E1026 08:05:06.887614    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:05:08 functional-852274 kubelet[4302]: E1026 08:05:08.887503    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:05:17 functional-852274 kubelet[4302]: E1026 08:05:17.887934    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:05:22 functional-852274 kubelet[4302]: E1026 08:05:22.888488    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:05:32 functional-852274 kubelet[4302]: E1026 08:05:32.887676    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:05:37 functional-852274 kubelet[4302]: E1026 08:05:37.888125    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:05:43 functional-852274 kubelet[4302]: E1026 08:05:43.888456    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	Oct 26 08:05:51 functional-852274 kubelet[4302]: E1026 08:05:51.888157    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-xsfbm" podUID="8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01"
	Oct 26 08:05:54 functional-852274 kubelet[4302]: E1026 08:05:54.887974    4302 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-n6snd" podUID="1e0c535d-2e30-47fe-babc-d89929de25ad"
	
	
	==> kubernetes-dashboard [bb38c213e1a3128998ad221abe25d9ecb8b873f7761a80d9e93f162aeae8f107] <==
	2025/10/26 07:55:50 Starting overwatch
	2025/10/26 07:55:50 Using namespace: kubernetes-dashboard
	2025/10/26 07:55:50 Using in-cluster config to connect to apiserver
	2025/10/26 07:55:50 Using secret token for csrf signing
	2025/10/26 07:55:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 07:55:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 07:55:50 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 07:55:50 Generating JWE encryption key
	2025/10/26 07:55:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 07:55:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 07:55:50 Initializing JWE encryption key from synchronized object
	2025/10/26 07:55:50 Creating in-cluster Sidecar client
	2025/10/26 07:55:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 07:55:50 Serving insecurely on HTTP port: 9090
	2025/10/26 07:56:20 Successful request to sidecar
	
	
	==> storage-provisioner [6ff30d5c4be9f8f930029e3c305c5ffbb9bc5ccb06d241458d6a188bb4a34c1c] <==
	W1026 07:54:28.011087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:28.015085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 07:54:28.109952       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-852274_0199aa23-7a19-4002-a5b6-af7ea574000b!
	W1026 07:54:30.018830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:30.022690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:32.025593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:32.028954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:34.032641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:34.036454       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:36.039956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:36.044713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:38.047421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:38.051188       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:40.054453       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:40.060050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:42.063423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:42.068660       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:44.071934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:44.076358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:46.079119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:46.082732       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:48.085977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:48.090756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:50.093744       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 07:54:50.098681       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [7fd07d0f0dd03685d0545f6d766a8b33a3547dce8ff0dbbacb607913c71bdd75] <==
	W1026 08:05:32.231313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:34.234552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:34.238209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:36.241421       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:36.246784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:38.249588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:38.253415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:40.256055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:40.260818       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:42.263688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:42.268281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:44.271930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:44.276227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:46.279157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:46.282911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:48.286227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:48.289780       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:50.293412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:50.298212       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:52.301289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:52.305181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:54.308751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:54.312856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:56.316551       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:05:56.320656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
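
The kubelet back-off loop in the logs above comes from crio's short-name resolution: "kicbase/echo-server" carries no registry component, and with short-name-mode = "enforcing" an unqualified name that could resolve against more than one unqualified-search-registries entry aborts the pull ("returns ambiguous list") instead of guessing. A minimal sketch of the two usual remedies, assuming the stock containers-registries.conf(5) layout inside the minikube node (the file paths here are illustrative, not taken from this run):

	# /etc/containers/registries.conf -- relax enforcement so ambiguous
	# short names fall back to the unqualified-search-registries list
	short-name-mode = "permissive"

	# /etc/containers/registries.conf.d/echo-server.conf -- or pin the
	# short name to a single registry via an alias (drop-in files
	# support the [aliases] table)
	[aliases]
	  "kicbase/echo-server" = "docker.io/kicbase/echo-server"

Referencing the fully qualified image directly in the manifest, e.g. image: docker.io/kicbase/echo-server:latest, sidesteps short-name resolution entirely.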
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-852274 -n functional-852274
helpers_test.go:269: (dbg) Run:  kubectl --context functional-852274 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-xsfbm hello-node-connect-7d85dfc575-n6snd
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-852274 describe pod busybox-mount hello-node-75c85bcc94-xsfbm hello-node-connect-7d85dfc575-n6snd
helpers_test.go:290: (dbg) kubectl --context functional-852274 describe pod busybox-mount hello-node-75c85bcc94-xsfbm hello-node-connect-7d85dfc575-n6snd:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-852274/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 07:55:46 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  cri-o://7f84c8ec6064ab4de85f5c1f3acd2b97b26cfe9100b1d63c5f367da643ac0742
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sun, 26 Oct 2025 07:55:47 +0000
	      Finished:     Sun, 26 Oct 2025 07:55:47 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-s2s86 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-s2s86:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-852274
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.227s (1.227s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-xsfbm
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-852274/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 07:55:37 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brr4f (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-brr4f:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xsfbm to functional-852274
	  Normal   Pulling    7m22s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    21s (x41 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     21s (x41 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-n6snd
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-852274/192.168.49.2
	Start Time:       Sun, 26 Oct 2025 07:55:55 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:           10.244.0.10
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-l76r9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-l76r9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-n6snd to functional-852274
	  Normal   Pulling    7m17s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m17s (x5 over 10m)   kubelet            Error: ErrImagePull
	  Warning  Failed     4m52s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m38s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.88s)
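All three pods above are stuck in ImagePullBackOff for the same reason, spelled out in the kubelet events: CRI-O is running with short-name mode enforcing, and the unqualified reference kicbase/echo-server:latest matches more than one unqualified-search registry, so the pull is rejected rather than resolved. A diagnostic sketch, assuming the standard containers-registries paths inside the kicbase node (the paths themselves are not shown in this log):

    # Inspect the short-name policy and search registries inside the node.
    out/minikube-linux-amd64 -p functional-852274 ssh -- \
      grep -Rn "short-name-mode\|unqualified-search-registries" \
      /etc/containers/registries.conf /etc/containers/registries.conf.d/ 2>/dev/null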

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-852274 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-852274 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-xsfbm" [8e4a8ba2-0c69-47dd-b6c0-0e66f2262b01] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-852274 -n functional-852274
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-10-26 08:05:37.511222322 +0000 UTC m=+1107.918873959
functional_test.go:1460: (dbg) Run:  kubectl --context functional-852274 describe po hello-node-75c85bcc94-xsfbm -n default
functional_test.go:1460: (dbg) kubectl --context functional-852274 describe po hello-node-75c85bcc94-xsfbm -n default:
Name:             hello-node-75c85bcc94-xsfbm
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-852274/192.168.49.2
Start Time:       Sun, 26 Oct 2025 07:55:37 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-brr4f (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-brr4f:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-xsfbm to functional-852274
Normal   Pulling    7m1s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m1s (x5 over 10m)    kubelet            Error: ErrImagePull
Normal   BackOff    4m51s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m51s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-852274 logs hello-node-75c85bcc94-xsfbm -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-852274 logs hello-node-75c85bcc94-xsfbm -n default: exit status 1 (71.026447ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-xsfbm" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-852274 logs hello-node-75c85bcc94-xsfbm -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.64s)
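This failure is the same short-name ambiguity seen above, now keeping the deployment from ever becoming ready. As a workaround sketch (not what the test runs, and assuming the Docker Hub copy of the image is the intended one), fully qualifying the reference sidesteps short-name resolution entirely:

    # Qualified references are pulled as-is, with no search-registry expansion.
    kubectl --context functional-852274 create deployment hello-node \
      --image=docker.io/kicbase/echo-server:latest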

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image load --daemon kicbase/echo-server:functional-852274 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-852274" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.97s)
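The assertion at functional_test.go:461 boils down to a load followed by a listing check. The round trip can be reproduced by hand with the same two commands the test runs; empty grep output reproduces the failure:

    out/minikube-linux-amd64 -p functional-852274 image load --daemon kicbase/echo-server:functional-852274
    out/minikube-linux-amd64 -p functional-852274 image ls | grep echo-server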

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image load --daemon kicbase/echo-server:functional-852274 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-852274" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-852274
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image load --daemon kicbase/echo-server:functional-852274 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 image load --daemon kicbase/echo-server:functional-852274 --alsologtostderr: (1.550011196s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls
functional_test.go:466: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 image ls: (2.305901044s)
functional_test.go:461: expected "kicbase/echo-server:functional-852274" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (4.01s)
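Here the pull and tag on the host Docker daemon succeed, so the image demonstrably exists before the load; it is the transfer into the node's CRI-O store that loses it. A quick check of the host-side half, assuming the local Docker daemon used by this job:

    # Succeeds on the host, confirming the tag exists before `image load` runs.
    docker image inspect --format '{{.Id}}' kicbase/echo-server:functional-852274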

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image save kicbase/echo-server:functional-852274 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.64s)
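`image save` exits zero here without producing the tarball, so an explicit presence check after the save surfaces the failure immediately. A sketch with an illustrative output path (the workspace path from the log behaves the same way):

    out/minikube-linux-amd64 -p functional-852274 image save \
      kicbase/echo-server:functional-852274 /tmp/echo-server-save.tar
    # A non-empty tarball with listable layers is the success condition.
    test -s /tmp/echo-server-save.tar && tar -tf /tmp/echo-server-save.tar | head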

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1026 07:55:45.343848   48908 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:55:45.344272   48908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.344287   48908 out.go:374] Setting ErrFile to fd 2...
	I1026 07:55:45.344293   48908 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.344597   48908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:55:45.345417   48908 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:55:45.345562   48908 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:55:45.346100   48908 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
	I1026 07:55:45.366968   48908 ssh_runner.go:195] Run: systemctl --version
	I1026 07:55:45.367034   48908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
	I1026 07:55:45.388033   48908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
	I1026 07:55:45.487173   48908 cache_images.go:290] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1026 07:55:45.487272   48908 cache_images.go:254] Failed to load cached images for "functional-852274": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1026 07:55:45.487313   48908 cache_images.go:266] failed pushing to: functional-852274

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-852274
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image save --daemon kicbase/echo-server:functional-852274 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-852274
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-852274: exit status 1 (20.958833ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-852274

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.43s)
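Note that the test inspects localhost/kicbase/echo-server:functional-852274; the localhost/ prefix reflects how the CRI-O image store names images stored from an unqualified reference. Listing everything that landed in the daemon, rather than inspecting one exact reference, shows whether the save produced anything at all; a sketch:

    docker images --format '{{.Repository}}:{{.Tag}}' | grep echo-server \
      || echo "nothing was saved back to the daemon"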

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 service --namespace=default --https --url hello-node: exit status 115 (536.56336ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:30523
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-852274 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.54s)
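SVC_UNREACHABLE here is a consequence of the earlier ImagePullBackOff rather than a new fault: the service exists and maps to NodePort 30523, but no running pod backs it. The standard way to confirm that split, using the context from this report:

    kubectl --context functional-852274 get endpoints hello-node -n default    # no addresses
    kubectl --context functional-852274 get pods -l app=hello-node -n default  # ImagePullBackOff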

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 service hello-node --url --format={{.IP}}: exit status 115 (545.862016ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-852274 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.55s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 service hello-node --url: exit status 115 (534.880416ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:30523
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-852274 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30523
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.54s)
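The command still prints the would-be endpoint before exiting non-zero, so the URL itself is well-formed; with nothing behind the NodePort, a direct probe would simply be refused or time out. A sketch against the endpoint from this log:

    curl -sS --max-time 5 http://192.168.49.2:30523/ \
      || echo "URL resolves but no pod is serving it"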

                                                
                                    
TestJSONOutput/pause/Command (2.27s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-677217 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-677217 --output=json --user=testUser: exit status 80 (2.264977177s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"0ae1b926-7b92-43f0-bc40-ebc3a21728c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-677217 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"8b7cd9dc-6fac-42ad-a7ff-e4e5ff1f3fe6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T08:14:04Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"087c0e27-6a2c-4740-a87a-7a0d4b32d9d9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-677217 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.27s)
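The GUEST_PAUSE error is minikube's pause path failing at its very first step: it shells into the node and runs `sudo runc list -f json`, and runc cannot open its default state directory. Re-running the probe by hand distinguishes a missing /run/runc from a broader runtime fault; a sketch, assuming the node is still up:

    out/minikube-linux-amd64 -p json-output-677217 ssh -- sudo ls -ld /run/runc   # missing state dir
    out/minikube-linux-amd64 -p json-output-677217 ssh -- sudo runc list -f json  # the exact probe minikube runs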

                                                
                                    
TestJSONOutput/unpause/Command (1.8s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-677217 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-677217 --output=json --user=testUser: exit status 80 (1.804431495s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"ef62967a-e5f6-4305-8327-60a99dda6856","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-677217 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"909de391-c4de-4469-bad0-c170c95e2fe0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-10-26T08:14:06Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"96fd99a0-b6b1-4393-bb74-5ac5f30193cf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-677217 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.80s)

                                                
                                    
TestPause/serial/Pause (6.03s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-504806 --alsologtostderr -v=5
E1026 08:27:00.252414   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-504806 --alsologtostderr -v=5: exit status 80 (2.620390632s)

                                                
                                                
-- stdout --
	* Pausing node pause-504806 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:26:57.793837  196686 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:26:57.794128  196686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:57.794139  196686 out.go:374] Setting ErrFile to fd 2...
	I1026 08:26:57.794145  196686 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:57.794420  196686 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:26:57.794712  196686 out.go:368] Setting JSON to false
	I1026 08:26:57.794764  196686 mustload.go:65] Loading cluster: pause-504806
	I1026 08:26:57.795294  196686 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:57.795874  196686 cli_runner.go:164] Run: docker container inspect pause-504806 --format={{.State.Status}}
	I1026 08:26:57.812789  196686 host.go:66] Checking if "pause-504806" exists ...
	I1026 08:26:57.813056  196686 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:26:57.871538  196686 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:82 SystemTime:2025-10-26 08:26:57.861479683 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:26:57.872155  196686 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-504806 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:26:58.011993  196686 out.go:179] * Pausing node pause-504806 ... 
	I1026 08:26:58.085299  196686 host.go:66] Checking if "pause-504806" exists ...
	I1026 08:26:58.085606  196686 ssh_runner.go:195] Run: systemctl --version
	I1026 08:26:58.085660  196686 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:58.105439  196686 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:58.203825  196686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:26:58.216460  196686 pause.go:52] kubelet running: true
	I1026 08:26:58.216516  196686 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:26:58.348827  196686 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:26:58.348918  196686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:26:58.414716  196686 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:26:58.414741  196686 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:26:58.414746  196686 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:26:58.414751  196686 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:26:58.414755  196686 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:26:58.414759  196686 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:26:58.414763  196686 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:26:58.414766  196686 cri.go:89] found id: ""
	I1026 08:26:58.414823  196686 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:26:58.426284  196686 retry.go:31] will retry after 370.723014ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:26:58Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:26:58.797916  196686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:26:58.810970  196686 pause.go:52] kubelet running: false
	I1026 08:26:58.811024  196686 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:26:58.935767  196686 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:26:58.935947  196686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:26:59.016603  196686 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:26:59.016631  196686 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:26:59.016637  196686 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:26:59.016642  196686 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:26:59.016646  196686 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:26:59.016650  196686 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:26:59.016654  196686 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:26:59.016657  196686 cri.go:89] found id: ""
	I1026 08:26:59.016723  196686 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:26:59.029381  196686 retry.go:31] will retry after 433.714466ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:26:59Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:26:59.464079  196686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:26:59.481450  196686 pause.go:52] kubelet running: false
	I1026 08:26:59.481580  196686 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:26:59.639435  196686 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:26:59.639521  196686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:26:59.710632  196686 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:26:59.710660  196686 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:26:59.710666  196686 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:26:59.710670  196686 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:26:59.710675  196686 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:26:59.710679  196686 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:26:59.710684  196686 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:26:59.710687  196686 cri.go:89] found id: ""
	I1026 08:26:59.710731  196686 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:26:59.723741  196686 retry.go:31] will retry after 409.407171ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:26:59Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:27:00.133326  196686 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:27:00.146982  196686 pause.go:52] kubelet running: false
	I1026 08:27:00.147032  196686 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:27:00.266351  196686 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:27:00.266445  196686 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:27:00.334727  196686 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:27:00.334749  196686 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:27:00.334753  196686 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:27:00.334757  196686 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:27:00.334759  196686 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:27:00.334762  196686 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:27:00.334764  196686 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:27:00.334767  196686 cri.go:89] found id: ""
	I1026 08:27:00.334803  196686 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:27:00.348588  196686 out.go:203] 
	W1026 08:27:00.349774  196686 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:27:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:27:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:27:00.349795  196686 out.go:285] * 
	* 
	W1026 08:27:00.353498  196686 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:27:00.355304  196686 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-504806 --alsologtostderr -v=5" : exit status 80
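The trace above shows the asymmetry precisely: crictl enumerates the same seven running containers on every attempt while `sudo runc list -f json` fails each time, so CRI-O's view of the cluster and runc's on-disk state under /run/runc have diverged. The two probes side by side, a sketch against this profile:

    out/minikube-linux-amd64 -p pause-504806 ssh -- sudo crictl ps --quiet | head  # CRI-O still lists containers
    out/minikube-linux-amd64 -p pause-504806 ssh -- sudo runc list -f json         # fails: /run/runc is missing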
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-504806
helpers_test.go:243: (dbg) docker inspect pause-504806:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872",
	        "Created": "2025-10-26T08:26:08.165801388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:26:08.214494293Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/hostname",
	        "HostsPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/hosts",
	        "LogPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872-json.log",
	        "Name": "/pause-504806",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-504806:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-504806",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872",
	                "LowerDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-504806",
	                "Source": "/var/lib/docker/volumes/pause-504806/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-504806",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-504806",
	                "name.minikube.sigs.k8s.io": "pause-504806",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd8666daeb29f8ebfdf6b45db3377f410ad17178e84af90d0d6c3c0a2b8f4dfa",
	            "SandboxKey": "/var/run/docker/netns/cd8666daeb29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-504806": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:88:96:12:cc:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c4bbf29c6775e3fe40fd806bd3d8f14bb330c9950268cd1c9c69a7fba2c3c0f",
	                    "EndpointID": "880d0592db81cad23aac9c4ad781d4cd70f38072eb7f66cf2d1eab8a48ab7aa2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-504806",
	                        "a079a030fad6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
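The inspect output above confirms the pause-504806 container is still up after the failed pause, with sshd published on 127.0.0.1:32988. As a hand-run sanity check (not something the harness executes), the same Go template the provisioner uses later in these logs extracts that mapping directly; the profile name pause-504806 is taken from this run:

	# Print the host port mapped to the container's 22/tcp (SSH) endpoint;
	# for this run it prints 32988, matching the Ports block above.
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-504806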
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-504806 -n pause-504806
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-504806 -n pause-504806: exit status 2 (333.595312ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
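The harness tolerates exit status 2 here because pausing is meant to stop the Kubernetes components while leaving the kic container itself running, so the .Host field still reads Running even though the overall status is non-zero. For a fuller picture than the --format={{.Host}} query, one could run (same binary and profile assumed):

	# Show all component states, not just the Host field the harness checks.
	out/minikube-linux-amd64 status -p pause-504806
	# A successfully paused profile typically reports:
	#   host: Running, kubelet: Stopped, apiserver: Paused, kubeconfig: Configured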
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-504806 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-244810                                                                                                                   │ test-preload-244810         │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ start   │ -p scheduled-stop-422857 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --cancel-scheduled                                                                                              │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:25 UTC │
	│ delete  │ -p scheduled-stop-422857                                                                                                                 │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:25 UTC │
	│ start   │ -p insufficient-storage-232115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-232115 │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │                     │
	│ delete  │ -p insufficient-storage-232115                                                                                                           │ insufficient-storage-232115 │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:25 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-535689      │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p offline-crio-486469 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-486469         │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p force-systemd-env-519045 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-519045    │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p pause-504806 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ delete  │ -p force-systemd-env-519045                                                                                                              │ force-systemd-env-519045    │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p missing-upgrade-300975 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-300975      │ jenkins │ v1.32.0 │ 26 Oct 25 08:26 UTC │                     │
	│ delete  │ -p offline-crio-486469                                                                                                                   │ offline-crio-486469         │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p pause-504806 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-462840   │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │                     │
	│ pause   │ -p pause-504806 --alsologtostderr -v=5                                                                                                   │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:26:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:26:50.890420  194299 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:26:50.890799  194299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:50.890811  194299 out.go:374] Setting ErrFile to fd 2...
	I1026 08:26:50.890817  194299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:50.891210  194299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:26:50.891780  194299 out.go:368] Setting JSON to false
	I1026 08:26:50.892920  194299 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4162,"bootTime":1761463049,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:26:50.892975  194299 start.go:141] virtualization: kvm guest
	I1026 08:26:50.897687  194299 out.go:179] * [kubernetes-upgrade-462840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:26:50.899432  194299 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:26:50.899456  194299 notify.go:220] Checking for updates...
	I1026 08:26:50.901852  194299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:26:50.903725  194299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:26:50.905082  194299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:26:50.906624  194299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:26:50.907727  194299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:26:50.909671  194299 config.go:182] Loaded profile config "cert-expiration-535689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:50.909844  194299 config.go:182] Loaded profile config "missing-upgrade-300975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 08:26:50.910022  194299 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:50.910157  194299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:26:50.938628  194299 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:26:50.938793  194299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:26:51.008195  194299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-26 08:26:50.99745331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:26:51.008355  194299 docker.go:318] overlay module found
	I1026 08:26:51.010592  194299 out.go:179] * Using the docker driver based on user configuration
	I1026 08:26:51.011783  194299 start.go:305] selected driver: docker
	I1026 08:26:51.011800  194299 start.go:925] validating driver "docker" against <nil>
	I1026 08:26:51.011815  194299 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:26:51.012459  194299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:26:51.092120  194299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:26:51.080031666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:26:51.092318  194299 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:26:51.092567  194299 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 08:26:51.094739  194299 out.go:179] * Using Docker driver with root privileges
	I1026 08:26:51.096102  194299 cni.go:84] Creating CNI manager for ""
	I1026 08:26:51.096178  194299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:26:51.096193  194299 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:26:51.096321  194299 start.go:349] cluster config:
	{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:26:51.097824  194299 out.go:179] * Starting "kubernetes-upgrade-462840" primary control-plane node in "kubernetes-upgrade-462840" cluster
	I1026 08:26:51.098860  194299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:26:51.099967  194299 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:26:51.101000  194299 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:26:51.101036  194299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:26:51.101036  194299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:26:51.101045  194299 cache.go:58] Caching tarball of preloaded images
	I1026 08:26:51.101220  194299 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:26:51.101232  194299 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 08:26:51.101364  194299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/config.json ...
	I1026 08:26:51.101396  194299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/config.json: {Name:mk8ee85c8e830b9f72a8f6866b6746efc897cf1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:51.124820  194299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:26:51.124844  194299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:26:51.124864  194299 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:26:51.124902  194299 start.go:360] acquireMachinesLock for kubernetes-upgrade-462840: {Name:mkd80f24e37729d329fe777d33e3092e56a7a873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:26:51.125009  194299 start.go:364] duration metric: took 85.018µs to acquireMachinesLock for "kubernetes-upgrade-462840"
	I1026 08:26:51.125039  194299 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:26:51.125121  194299 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:26:50.410748  192378 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:26:50.411089  192378 start.go:159] libmachine.API.Create for "missing-upgrade-300975" (driver="docker")
	I1026 08:26:50.411122  192378 client.go:168] LocalClient.Create starting
	I1026 08:26:50.411213  192378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:26:50.411264  192378 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:50.411281  192378 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:50.411363  192378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:26:50.411385  192378 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:50.411397  192378 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:50.411831  192378 cli_runner.go:164] Run: docker network inspect missing-upgrade-300975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:26:50.431808  192378 cli_runner.go:211] docker network inspect missing-upgrade-300975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:26:50.431889  192378 network_create.go:281] running [docker network inspect missing-upgrade-300975] to gather additional debugging logs...
	I1026 08:26:50.431904  192378 cli_runner.go:164] Run: docker network inspect missing-upgrade-300975
	W1026 08:26:50.450574  192378 cli_runner.go:211] docker network inspect missing-upgrade-300975 returned with exit code 1
	I1026 08:26:50.450601  192378 network_create.go:284] error running [docker network inspect missing-upgrade-300975]: docker network inspect missing-upgrade-300975: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-300975 not found
	I1026 08:26:50.450618  192378 network_create.go:286] output of [docker network inspect missing-upgrade-300975]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-300975 not found
	
	** /stderr **
	I1026 08:26:50.450745  192378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:50.469214  192378 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:26:50.469727  192378 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:26:50.470202  192378 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:26:50.470665  192378 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c512b29df443 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:67:8a:60:ac:da} reservation:<nil>}
	I1026 08:26:50.471131  192378 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-468fe1679ab5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1a:7c:9f:9c:66:a4} reservation:<nil>}
	I1026 08:26:50.471753  192378 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025eb280}
	I1026 08:26:50.471769  192378 network_create.go:124] attempt to create docker network missing-upgrade-300975 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 08:26:50.471817  192378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-300975 missing-upgrade-300975
	I1026 08:26:50.534959  192378 network_create.go:108] docker network missing-upgrade-300975 192.168.94.0/24 created
	I1026 08:26:50.534993  192378 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-300975" container
	I1026 08:26:50.535077  192378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:26:50.553700  192378 cli_runner.go:164] Run: docker volume create missing-upgrade-300975 --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:26:50.572784  192378 oci.go:103] Successfully created a docker volume missing-upgrade-300975
	I1026 08:26:50.572861  192378 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-300975-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --entrypoint /usr/bin/test -v missing-upgrade-300975:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1026 08:26:50.993743  192378 oci.go:107] Successfully prepared a docker volume missing-upgrade-300975
	I1026 08:26:50.993773  192378 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 08:26:50.993800  192378 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:26:50.993878  192378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-300975:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:26:48.220292  193447 out.go:252] * Updating the running docker "pause-504806" container ...
	I1026 08:26:48.220338  193447 machine.go:93] provisionDockerMachine start ...
	I1026 08:26:48.220454  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.243455  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.243783  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.243798  193447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:26:48.416625  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-504806
	
	I1026 08:26:48.416663  193447 ubuntu.go:182] provisioning hostname "pause-504806"
	I1026 08:26:48.416725  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.437816  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.438079  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.438100  193447 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-504806 && echo "pause-504806" | sudo tee /etc/hostname
	I1026 08:26:48.663515  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-504806
	
	I1026 08:26:48.663603  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.684983  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.685394  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.685423  193447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-504806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-504806/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-504806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:26:48.827979  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:26:48.828011  193447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:26:48.828065  193447 ubuntu.go:190] setting up certificates
	I1026 08:26:48.828076  193447 provision.go:84] configureAuth start
	I1026 08:26:48.828150  193447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-504806
	I1026 08:26:48.846566  193447 provision.go:143] copyHostCerts
	I1026 08:26:48.846643  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:26:48.846661  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:26:48.903322  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:26:48.903503  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:26:48.903518  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:26:48.903564  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:26:48.903663  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:26:48.903674  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:26:48.903710  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:26:48.903794  193447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.pause-504806 san=[127.0.0.1 192.168.103.2 localhost minikube pause-504806]
	I1026 08:26:49.089494  193447 provision.go:177] copyRemoteCerts
	I1026 08:26:49.089586  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:26:49.089635  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:49.108415  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:49.209060  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:26:49.227804  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:26:49.245263  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:26:49.263401  193447 provision.go:87] duration metric: took 435.308206ms to configureAuth
	I1026 08:26:49.263432  193447 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:26:49.263640  193447 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:49.263730  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:49.281508  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:49.281727  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:49.281741  193447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:26:50.371946  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:26:50.371974  193447 machine.go:96] duration metric: took 2.151627832s to provisionDockerMachine
	I1026 08:26:50.371984  193447 start.go:293] postStartSetup for "pause-504806" (driver="docker")
	I1026 08:26:50.371994  193447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:26:50.372063  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:26:50.372100  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.392859  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.501622  193447 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:26:50.505453  193447 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:26:50.505486  193447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:26:50.505498  193447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:26:50.505548  193447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:26:50.505659  193447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:26:50.505784  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:26:50.514022  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:50.532544  193447 start.go:296] duration metric: took 160.544322ms for postStartSetup
	I1026 08:26:50.532621  193447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:26:50.532695  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.552897  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.654282  193447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:26:50.659643  193447 fix.go:56] duration metric: took 2.509300124s for fixHost
	I1026 08:26:50.659669  193447 start.go:83] releasing machines lock for "pause-504806", held for 2.509347292s
	I1026 08:26:50.659752  193447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-504806
	I1026 08:26:50.679464  193447 ssh_runner.go:195] Run: cat /version.json
	I1026 08:26:50.679543  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.679556  193447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:26:50.679616  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.702009  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.703090  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.870618  193447 ssh_runner.go:195] Run: systemctl --version
	I1026 08:26:50.879330  193447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:26:50.923593  193447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:26:50.929136  193447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:26:50.929224  193447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:50.939331  193447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:26:50.939355  193447 start.go:495] detecting cgroup driver to use...
	I1026 08:26:50.939396  193447 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:26:50.939438  193447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:26:50.961347  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:26:50.979185  193447 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:26:50.979244  193447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:26:50.998634  193447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:26:51.014525  193447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:26:51.154629  193447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:26:51.298065  193447 docker.go:234] disabling docker service ...
	I1026 08:26:51.298154  193447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:26:51.315991  193447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:26:51.332519  193447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:26:51.453319  193447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:26:51.592531  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:26:51.606893  193447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:26:51.671963  193447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:26:51.672065  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.697777  193447 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:26:51.697842  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.719151  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.729750  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.740587  193447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:26:51.750497  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.760750  193447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.770658  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.779969  193447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:26:51.788046  193447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:26:51.796911  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:51.956003  193447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:26:54.231337  193447 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.275300231s)
	I1026 08:26:54.231363  193447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:26:54.231415  193447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:26:54.236333  193447 start.go:563] Will wait 60s for crictl version
	I1026 08:26:54.236385  193447 ssh_runner.go:195] Run: which crictl
	I1026 08:26:54.241952  193447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:26:54.270818  193447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:26:54.270920  193447 ssh_runner.go:195] Run: crio --version
	I1026 08:26:54.308447  193447 ssh_runner.go:195] Run: crio --version
	I1026 08:26:54.344427  193447 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:26:54.345908  193447 cli_runner.go:164] Run: docker network inspect pause-504806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:54.366880  193447 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 08:26:54.371897  193447 kubeadm.go:883] updating cluster {Name:pause-504806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:26:54.372150  193447 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:26:54.372222  193447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:54.409618  193447 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:26:54.409715  193447 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:26:54.409790  193447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:54.444890  193447 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:26:54.444919  193447 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:26:54.444927  193447 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 08:26:54.445059  193447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-504806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:26:54.445157  193447 ssh_runner.go:195] Run: crio config
	I1026 08:26:54.500753  193447 cni.go:84] Creating CNI manager for ""
	I1026 08:26:54.500780  193447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:26:54.500799  193447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:26:54.500827  193447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-504806 NodeName:pause-504806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:26:54.501000  193447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-504806"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
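	The block above is the complete kubeadm config that minikube renders and ships to /var/tmp/minikube/kubeadm.yaml.new (2211 bytes, per the scp line below). A minimal Go sketch of that render step, using a trimmed-down template and the parameters visible in the log; this is illustrative only, not minikube's actual generator:

	package main

	import (
		"os"
		"text/template"
	)

	// kubeadmTmpl is a trimmed-down stand-in for the config dumped above;
	// minikube's real template carries many more fields.
	const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.NodeIP}}
	  bindPort: {{.Port}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	kubernetesVersion: {{.KubernetesVersion}}
	controlPlaneEndpoint: control-plane.minikube.internal:{{.Port}}
	networking:
	  podSubnet: "{{.PodCIDR}}"
	  serviceSubnet: {{.ServiceCIDR}}
	`

	func main() {
		params := struct {
			NodeIP, CRISocket, NodeName, KubernetesVersion, PodCIDR, ServiceCIDR string
			Port                                                                 int
		}{
			NodeIP:            "192.168.103.2",
			CRISocket:         "/var/run/crio/crio.sock",
			NodeName:          "pause-504806",
			KubernetesVersion: "v1.34.1",
			PodCIDR:           "10.244.0.0/16",
			ServiceCIDR:       "10.96.0.0/12",
			Port:              8443,
		}
		// Render to stdout; minikube instead scp's the result to the node.
		if err := template.Must(template.New("kubeadm").Parse(kubeadmTmpl)).Execute(os.Stdout, params); err != nil {
			panic(err)
		}
	}
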
	I1026 08:26:54.501075  193447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:26:54.512380  193447 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:26:54.512451  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:26:54.521980  193447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:26:54.539414  193447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:26:54.555359  193447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1026 08:26:54.569697  193447 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:26:54.573816  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:54.730345  193447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:26:54.749593  193447 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806 for IP: 192.168.103.2
	I1026 08:26:54.749618  193447 certs.go:195] generating shared ca certs ...
	I1026 08:26:54.749638  193447 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:54.749784  193447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:26:54.749839  193447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:26:54.749853  193447 certs.go:257] generating profile certs ...
	I1026 08:26:54.749960  193447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key
	I1026 08:26:54.750045  193447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.key.50908169
	I1026 08:26:54.750101  193447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.key
	I1026 08:26:54.750278  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:26:54.750332  193447 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:26:54.750348  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:26:54.750384  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:26:54.750416  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:26:54.750451  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:26:54.750509  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:54.751143  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:26:54.774061  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:26:54.804440  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:26:54.835549  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:26:54.859481  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 08:26:54.879872  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:26:54.901572  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:26:54.935682  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:26:54.958197  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:26:54.977132  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:26:54.997988  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:26:55.045773  193447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:26:55.058926  193447 ssh_runner.go:195] Run: openssl version
	I1026 08:26:55.065508  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:26:55.074284  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.079158  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.079216  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.148923  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:26:55.158019  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:26:55.167804  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.172195  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.172300  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.217728  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:26:55.228622  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:26:55.238769  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.243541  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.243604  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.282802  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
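	The ls/hash/ln sequences above install each CA into the OpenSSL trust directory: the cert is hashed with `openssl x509 -hash -noout` and then symlinked as /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA.pem). A small Go sketch of the same dance, assuming openssl is on PATH and the process can write to /etc/ssl/certs:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCertByHash reproduces the hash-then-symlink steps in the log:
	// OpenSSL resolves CAs in /etc/ssl/certs via <subject-hash>.0 links.
	func linkCertByHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941 for minikubeCA.pem
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		if _, err := os.Lstat(link); err == nil {
			return nil // already linked, matching the `test -L || ln -fs` guard
		}
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}
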
	I1026 08:26:55.291510  193447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:26:55.295335  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:26:55.333452  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:26:55.372296  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:26:55.415092  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:26:55.450544  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:26:55.489397  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
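	Each `-checkend 86400` call above asks OpenSSL whether the certificate will still be valid 24 hours from now. The same check in Go with crypto/x509, as a sketch reusing paths from the log:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// checkend reports whether the first certificate in a PEM file expires
	// within d, mirroring `openssl x509 -noout -checkend <seconds>`.
	func checkend(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		// Same certs the log probes; paths assume a minikube node filesystem.
		for _, p := range []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
		} {
			soon, err := checkend(p, 24*time.Hour)
			fmt.Printf("%s expiringSoon=%v err=%v\n", p, soon, err)
		}
	}
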
	I1026 08:26:55.525516  193447 kubeadm.go:400] StartCluster: {Name:pause-504806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry
-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:26:55.525654  193447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:26:55.525723  193447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:26:55.560924  193447 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:26:55.560948  193447 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:26:55.560953  193447 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:26:55.560958  193447 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:26:55.560963  193447 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:26:55.560967  193447 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:26:55.560971  193447 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:26:55.560975  193447 cri.go:89] found id: ""
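	The IDs above are the newline-split output of the `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system` call (the final empty entry comes from the trailing newline). A sketch of the same listing from Go; it needs root and a live CRI socket, so treat it as illustrative:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listKubeSystemContainers mirrors the crictl invocation in the log,
	// returning one container ID per line of output. strings.Fields drops
	// the empty trailing entry the raw split produces.
	func listKubeSystemContainers() ([]string, error) {
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listKubeSystemContainers()
		fmt.Println(ids, err)
	}
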
	I1026 08:26:55.561021  193447 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:26:55.573145  193447 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:26:55Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:26:55.573227  193447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:26:55.581811  193447 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:26:55.581831  193447 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:26:55.581879  193447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:26:55.590131  193447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:26:55.590724  193447 kubeconfig.go:125] found "pause-504806" server: "https://192.168.103.2:8443"
	I1026 08:26:55.591473  193447 kapi.go:59] client config for pause-504806: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
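	The rest.Config dump above shows the client minikube builds for the cluster: mutual TLS with the profile's client cert and key, verified against the minikube CA. A sketch of an equivalent client with k8s.io/client-go, reusing the logged paths:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		// Mirrors the logged config: client cert/key plus the cluster CA.
		cfg := &rest.Config{
			Host: "https://192.168.103.2:8443",
			TLSClientConfig: rest.TLSClientConfig{
				CertFile: "/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.crt",
				KeyFile:  "/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key",
				CAFile:   "/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt",
			},
		}
		clientset, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("nodes:", len(nodes.Items))
	}
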
	I1026 08:26:55.591835  193447 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:26:55.591855  193447 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:26:55.591860  193447 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:26:55.591866  193447 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:26:55.591874  193447 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:26:55.592181  193447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:26:55.604495  193447 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 08:26:55.604533  193447 kubeadm.go:601] duration metric: took 22.695286ms to restartPrimaryControlPlane
	I1026 08:26:55.604542  193447 kubeadm.go:402] duration metric: took 79.039298ms to StartCluster
	I1026 08:26:55.604561  193447 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:55.604633  193447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:26:55.605721  193447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:55.605977  193447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:26:55.606045  193447 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:26:55.606329  193447 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:55.697507  193447 out.go:179] * Verifying Kubernetes components...
	I1026 08:26:55.697518  193447 out.go:179] * Enabled addons: 
	I1026 08:26:51.127379  194299 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:26:51.127635  194299 start.go:159] libmachine.API.Create for "kubernetes-upgrade-462840" (driver="docker")
	I1026 08:26:51.127671  194299 client.go:168] LocalClient.Create starting
	I1026 08:26:51.127749  194299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:26:51.127792  194299 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:51.127818  194299 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:51.127896  194299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:26:51.127923  194299 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:51.127937  194299 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:51.128392  194299 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-462840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:26:51.148647  194299 cli_runner.go:211] docker network inspect kubernetes-upgrade-462840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:26:51.148739  194299 network_create.go:284] running [docker network inspect kubernetes-upgrade-462840] to gather additional debugging logs...
	I1026 08:26:51.148764  194299 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-462840
	W1026 08:26:51.173410  194299 cli_runner.go:211] docker network inspect kubernetes-upgrade-462840 returned with exit code 1
	I1026 08:26:51.173442  194299 network_create.go:287] error running [docker network inspect kubernetes-upgrade-462840]: docker network inspect kubernetes-upgrade-462840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-462840 not found
	I1026 08:26:51.173458  194299 network_create.go:289] output of [docker network inspect kubernetes-upgrade-462840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-462840 not found
	
	** /stderr **
	I1026 08:26:51.173610  194299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:51.196117  194299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:26:51.196858  194299 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:26:51.197641  194299 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:26:51.198471  194299 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c512b29df443 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:67:8a:60:ac:da} reservation:<nil>}
	I1026 08:26:51.199649  194299 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff8fd0}
	I1026 08:26:51.199739  194299 network_create.go:124] attempt to create docker network kubernetes-upgrade-462840 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 08:26:51.199820  194299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 kubernetes-upgrade-462840
	I1026 08:26:51.269670  194299 network_create.go:108] docker network kubernetes-upgrade-462840 192.168.85.0/24 created
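	The subnet scan above starts at 192.168.49.0/24 and steps the third octet by 9 until it finds a free /24 (49, 58, 67 and 76 are taken; 85 is free). A sketch of that stepping logic as observed in the log; the real check additionally inspects existing docker networks and host interfaces:

	package main

	import "fmt"

	// firstFreeSubnet walks candidate 192.168.x.0/24 subnets the way the log
	// shows: start at .49 and step the third octet by 9 until a subnet is not
	// in the taken set. Purely illustrative.
	func firstFreeSubnet(taken map[string]bool) string {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !taken[cidr] {
				return cidr
			}
		}
		return ""
	}

	func main() {
		taken := map[string]bool{ // the four subnets skipped in the log
			"192.168.49.0/24": true,
			"192.168.58.0/24": true,
			"192.168.67.0/24": true,
			"192.168.76.0/24": true,
		}
		fmt.Println(firstFreeSubnet(taken)) // prints 192.168.85.0/24, matching the log
	}
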
	I1026 08:26:51.269724  194299 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-462840" container
	I1026 08:26:51.269807  194299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:26:51.291278  194299 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-462840 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:26:51.312546  194299 oci.go:103] Successfully created a docker volume kubernetes-upgrade-462840
	I1026 08:26:51.312633  194299 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-462840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --entrypoint /usr/bin/test -v kubernetes-upgrade-462840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:26:54.294012  194299 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-462840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --entrypoint /usr/bin/test -v kubernetes-upgrade-462840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.981333348s)
	I1026 08:26:54.294043  194299 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-462840
	I1026 08:26:54.294080  194299 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:26:54.294102  194299 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:26:54.294185  194299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-462840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:26:54.065050  192378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-300975:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.071097923s)
	I1026 08:26:54.065078  192378 kic.go:203] duration metric: took 3.071278 seconds to extract preloaded images to volume
	W1026 08:26:54.065164  192378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:26:54.065188  192378 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:26:54.065224  192378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:26:54.134000  192378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-300975 --name missing-upgrade-300975 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-300975 --network missing-upgrade-300975 --ip 192.168.94.2 --volume missing-upgrade-300975:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1026 08:26:54.481544  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Running}}
	I1026 08:26:54.503777  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.527782  192378 cli_runner.go:164] Run: docker exec missing-upgrade-300975 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:26:54.579590  192378 oci.go:144] the created container "missing-upgrade-300975" has a running status.
	I1026 08:26:54.579615  192378 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa...
	I1026 08:26:54.755578  192378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:26:54.795384  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.825957  192378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:26:54.825979  192378 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-300975 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:26:54.885021  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.904044  192378 machine.go:88] provisioning docker machine ...
	I1026 08:26:54.904087  192378 ubuntu.go:169] provisioning hostname "missing-upgrade-300975"
	I1026 08:26:54.904157  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:54.924707  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:54.934880  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:54.934903  192378 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-300975 && echo "missing-upgrade-300975" | sudo tee /etc/hostname
	I1026 08:26:55.103426  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-300975
	
	I1026 08:26:55.103523  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.121612  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:55.121943  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:55.121956  192378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-300975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-300975/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-300975' | sudo tee -a /etc/hosts; 
				fi
			fi
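	The hostname commands above run over libmachine's native SSH client against the container's forwarded SSH port (127.0.0.1:32993). A sketch of the same round trip with golang.org/x/crypto/ssh, using the machine key path from the log; host-key checking is disabled here only because these are throwaway test containers:

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	// runOverSSH dials the forwarded port and runs one command, roughly what
	// the "native" SSH client in the log does.
	func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return "", err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VMs only
		}
		client, err := ssh.Dial("tcp", addr, cfg)
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.CombinedOutput(cmd)
		return string(out), err
	}

	func main() {
		out, err := runOverSSH("127.0.0.1:32993", "docker",
			"/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa",
			"hostname")
		fmt.Println(out, err)
	}
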
	I1026 08:26:55.240022  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:26:55.240044  192378 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:26:55.240068  192378 ubuntu.go:177] setting up certificates
	I1026 08:26:55.240081  192378 provision.go:83] configureAuth start
	I1026 08:26:55.240159  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:55.260591  192378 provision.go:138] copyHostCerts
	I1026 08:26:55.260644  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:26:55.260651  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:26:55.260703  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:26:55.260779  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:26:55.260782  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:26:55.260806  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:26:55.260857  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:26:55.260860  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:26:55.260881  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:26:55.260933  192378 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-300975 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-300975]
	I1026 08:26:55.523936  192378 provision.go:172] copyRemoteCerts
	I1026 08:26:55.524010  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:26:55.524056  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.545280  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:55.635371  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:26:55.727529  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 08:26:55.769263  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:26:55.798216  192378 provision.go:86] duration metric: configureAuth took 558.119947ms
	I1026 08:26:55.798242  192378 ubuntu.go:193] setting minikube options for container-runtime
	I1026 08:26:55.798472  192378 config.go:182] Loaded profile config "missing-upgrade-300975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 08:26:55.798609  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.818927  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:55.819426  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:55.819445  192378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:26:56.050443  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:26:56.050465  192378 machine.go:91] provisioned docker machine in 1.146396664s
	I1026 08:26:56.050474  192378 client.go:171] LocalClient.Create took 5.639347865s
	I1026 08:26:56.050494  192378 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-300975" took 5.63940715s
	I1026 08:26:56.050503  192378 start.go:300] post-start starting for "missing-upgrade-300975" (driver="docker")
	I1026 08:26:56.050515  192378 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:26:56.050581  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:26:56.050618  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.071745  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.163189  192378 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:26:56.167015  192378 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:26:56.167069  192378 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 08:26:56.167080  192378 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 08:26:56.167086  192378 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 08:26:56.167097  192378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:26:56.167151  192378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:26:56.167224  192378 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:26:56.167348  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:26:56.177268  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:56.210005  192378 start.go:303] post-start completed in 159.484732ms
	I1026 08:26:56.210442  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:56.229581  192378 profile.go:148] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/config.json ...
	I1026 08:26:56.229841  192378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:26:56.229883  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.248878  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.332728  192378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:26:56.337035  192378 start.go:128] duration metric: createHost completed in 5.928199636s
	I1026 08:26:56.337049  192378 start.go:83] releasing machines lock for "missing-upgrade-300975", held for 5.92835033s
	I1026 08:26:56.337124  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:56.357644  192378 ssh_runner.go:195] Run: cat /version.json
	I1026 08:26:56.357712  192378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:26:56.357774  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.357798  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.377616  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.378589  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.555694  192378 ssh_runner.go:195] Run: systemctl --version
	I1026 08:26:56.560499  192378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:26:55.699736  193447 addons.go:514] duration metric: took 93.697611ms for enable addons: enabled=[]
	I1026 08:26:55.699800  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:55.849605  193447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:26:55.865153  193447 node_ready.go:35] waiting up to 6m0s for node "pause-504806" to be "Ready" ...
	I1026 08:26:55.874496  193447 node_ready.go:49] node "pause-504806" is "Ready"
	I1026 08:26:55.874526  193447 node_ready.go:38] duration metric: took 9.338002ms for node "pause-504806" to be "Ready" ...
	I1026 08:26:55.874543  193447 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:26:55.874593  193447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:26:55.888516  193447 api_server.go:72] duration metric: took 282.503468ms to wait for apiserver process to appear ...
	I1026 08:26:55.888546  193447 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:26:55.888570  193447 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:26:55.893345  193447 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
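	The healthz probe above is an HTTPS GET that must return 200 with body "ok". A sketch of the same probe in Go, trusting only the minikube CA; minikube's own client additionally presents the profile's client certificate:

	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	// probeHealthz issues the GET shown in the log against the apiserver's
	// /healthz endpoint, verifying the server with the given CA bundle.
	func probeHealthz(url, caPath string) (string, error) {
		caPEM, err := os.ReadFile(caPath)
		if err != nil {
			return "", err
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)
		client := &http.Client{Transport: &http.Transport{
			TLSClientConfig: &tls.Config{RootCAs: pool},
		}}
		resp, err := client.Get(url)
		if err != nil {
			return "", err
		}
		defer resp.Body.Close()
		body, err := io.ReadAll(resp.Body)
		return string(body), err // expect "ok" with HTTP 200
	}

	func main() {
		body, err := probeHealthz("https://192.168.103.2:8443/healthz",
			"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt")
		fmt.Println(body, err)
	}
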
	I1026 08:26:55.894481  193447 api_server.go:141] control plane version: v1.34.1
	I1026 08:26:55.894509  193447 api_server.go:131] duration metric: took 5.954771ms to wait for apiserver health ...
	I1026 08:26:55.894519  193447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:26:55.898311  193447 system_pods.go:59] 7 kube-system pods found
	I1026 08:26:55.898344  193447 system_pods.go:61] "coredns-66bc5c9577-qcszn" [c3e0eaff-6a88-440e-98b7-9230b2966e07] Running
	I1026 08:26:55.898351  193447 system_pods.go:61] "etcd-pause-504806" [7e996a9f-c079-405e-8c07-e7cfa96c1c0a] Running
	I1026 08:26:55.898357  193447 system_pods.go:61] "kindnet-cjpzm" [0ee3c5e1-47fd-4318-9c90-f8eb93610ebf] Running
	I1026 08:26:55.898363  193447 system_pods.go:61] "kube-apiserver-pause-504806" [111783e2-0d07-4372-8f7f-7906dbb27b7b] Running
	I1026 08:26:55.898370  193447 system_pods.go:61] "kube-controller-manager-pause-504806" [d6770757-94b2-452d-8605-2864f08979fb] Running
	I1026 08:26:55.898375  193447 system_pods.go:61] "kube-proxy-9d7fv" [5884f8ce-f7c9-452b-b9b0-b025b0a22792] Running
	I1026 08:26:55.898381  193447 system_pods.go:61] "kube-scheduler-pause-504806" [8f8d83d3-0623-46cd-9f40-4aa50a5c7173] Running
	I1026 08:26:55.898388  193447 system_pods.go:74] duration metric: took 3.862442ms to wait for pod list to return data ...
	I1026 08:26:55.898403  193447 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:26:55.900956  193447 default_sa.go:45] found service account: "default"
	I1026 08:26:55.900979  193447 default_sa.go:55] duration metric: took 2.569042ms for default service account to be created ...
	I1026 08:26:55.900989  193447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:26:55.903903  193447 system_pods.go:86] 7 kube-system pods found
	I1026 08:26:55.903933  193447 system_pods.go:89] "coredns-66bc5c9577-qcszn" [c3e0eaff-6a88-440e-98b7-9230b2966e07] Running
	I1026 08:26:55.903943  193447 system_pods.go:89] "etcd-pause-504806" [7e996a9f-c079-405e-8c07-e7cfa96c1c0a] Running
	I1026 08:26:55.903948  193447 system_pods.go:89] "kindnet-cjpzm" [0ee3c5e1-47fd-4318-9c90-f8eb93610ebf] Running
	I1026 08:26:55.903954  193447 system_pods.go:89] "kube-apiserver-pause-504806" [111783e2-0d07-4372-8f7f-7906dbb27b7b] Running
	I1026 08:26:55.903960  193447 system_pods.go:89] "kube-controller-manager-pause-504806" [d6770757-94b2-452d-8605-2864f08979fb] Running
	I1026 08:26:55.903974  193447 system_pods.go:89] "kube-proxy-9d7fv" [5884f8ce-f7c9-452b-b9b0-b025b0a22792] Running
	I1026 08:26:55.903980  193447 system_pods.go:89] "kube-scheduler-pause-504806" [8f8d83d3-0623-46cd-9f40-4aa50a5c7173] Running
	I1026 08:26:55.903989  193447 system_pods.go:126] duration metric: took 2.992978ms to wait for k8s-apps to be running ...
	I1026 08:26:55.904004  193447 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:26:55.904051  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:26:55.918664  193447 system_svc.go:56] duration metric: took 14.654221ms WaitForService to wait for kubelet
	I1026 08:26:55.918695  193447 kubeadm.go:586] duration metric: took 312.686511ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:26:55.918719  193447 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:26:55.921686  193447 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:26:55.921718  193447 node_conditions.go:123] node cpu capacity is 8
	I1026 08:26:55.921728  193447 node_conditions.go:105] duration metric: took 3.00404ms to run NodePressure ...
	I1026 08:26:55.921739  193447 start.go:241] waiting for startup goroutines ...
	I1026 08:26:55.921745  193447 start.go:246] waiting for cluster config update ...
	I1026 08:26:55.921752  193447 start.go:255] writing updated cluster config ...
	I1026 08:26:55.921991  193447 ssh_runner.go:195] Run: rm -f paused
	I1026 08:26:55.926116  193447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:26:55.926722  193447 kapi.go:59] client config for pause-504806: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:26:55.929748  193447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qcszn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.934034  193447 pod_ready.go:94] pod "coredns-66bc5c9577-qcszn" is "Ready"
	I1026 08:26:55.934061  193447 pod_ready.go:86] duration metric: took 4.289515ms for pod "coredns-66bc5c9577-qcszn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.936054  193447 pod_ready.go:83] waiting for pod "etcd-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.940341  193447 pod_ready.go:94] pod "etcd-pause-504806" is "Ready"
	I1026 08:26:55.940367  193447 pod_ready.go:86] duration metric: took 4.289908ms for pod "etcd-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.942244  193447 pod_ready.go:83] waiting for pod "kube-apiserver-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.946698  193447 pod_ready.go:94] pod "kube-apiserver-pause-504806" is "Ready"
	I1026 08:26:55.946721  193447 pod_ready.go:86] duration metric: took 4.44185ms for pod "kube-apiserver-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.948826  193447 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.330033  193447 pod_ready.go:94] pod "kube-controller-manager-pause-504806" is "Ready"
	I1026 08:26:56.330070  193447 pod_ready.go:86] duration metric: took 381.221119ms for pod "kube-controller-manager-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.530117  193447 pod_ready.go:83] waiting for pod "kube-proxy-9d7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.931217  193447 pod_ready.go:94] pod "kube-proxy-9d7fv" is "Ready"
	I1026 08:26:56.931267  193447 pod_ready.go:86] duration metric: took 401.126477ms for pod "kube-proxy-9d7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.130789  193447 pod_ready.go:83] waiting for pod "kube-scheduler-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.529605  193447 pod_ready.go:94] pod "kube-scheduler-pause-504806" is "Ready"
	I1026 08:26:57.529632  193447 pod_ready.go:86] duration metric: took 398.812394ms for pod "kube-scheduler-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.529646  193447 pod_ready.go:40] duration metric: took 1.603499393s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
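	Each pod_ready.go wait above polls a control-plane pod until its Ready condition is True (or the pod is gone). A sketch of that loop with client-go; the 500ms interval is an assumption, since the log only shows outcomes and durations:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitPodReady polls until the named pod reports condition Ready=True or
	// the timeout lapses, roughly the loop behind the log lines above.
	func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(context.TODO(), 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat transient errors as "not yet"
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21772-9429/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		fmt.Println(waitPodReady(cs, "kube-system", "etcd-pause-504806", 4*time.Minute))
	}
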
	I1026 08:26:57.573707  193447 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:26:57.606645  193447 out.go:179] * Done! kubectl is now configured to use "pause-504806" cluster and "default" namespace by default
	I1026 08:26:56.702682  192378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 08:26:56.707334  192378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:56.732157  192378 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 08:26:56.732239  192378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:56.764868  192378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
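	The find/mv pass above disables conflicting CNI configs by renaming anything matching *bridge* or *podman* in /etc/cni/net.d to *.mk_disabled. The same effect in Go, as a sketch:

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNIConfs renames bridge/podman CNI configs to
	// *.mk_disabled, matching the `find ... -exec mv` in the log.
	func disableBridgeCNIConfs(dir string) ([]string, error) {
		var disabled []string
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		// On the node in the log this disables 87-podman-bridge.conflist
		// and 100-crio-bridge.conf.
		fmt.Println(disableBridgeCNIConfs("/etc/cni/net.d"))
	}
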
	I1026 08:26:56.764886  192378 start.go:472] detecting cgroup driver to use...
	I1026 08:26:56.764924  192378 detect.go:199] detected "systemd" cgroup driver on host os
	I1026 08:26:56.765051  192378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:26:56.781999  192378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:26:56.794411  192378 docker.go:203] disabling cri-docker service (if available) ...
	I1026 08:26:56.794456  192378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:26:56.810551  192378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:26:56.827230  192378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:26:56.900863  192378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:26:56.982929  192378 docker.go:219] disabling docker service ...
	I1026 08:26:56.982994  192378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:26:57.001649  192378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:26:57.013594  192378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:26:57.088338  192378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:26:57.292705  192378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
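
The sequence above makes sure only one runtime owns the node: containerd is stopped, then cri-docker and docker are each stopped, their sockets disabled, and the services masked so socket activation cannot bring them back. A sketch of that stop/disable/mask sequence; failures are tolerated because the units may already be inactive or absent, and the function name is illustrative:

```go
// Sketch of the stop/disable/mask sequence logged above, which leaves
// CRI-O as the only container runtime on the node.
package main

import (
	"fmt"
	"os/exec"
)

func disableService(name string) {
	steps := [][]string{
		{"systemctl", "stop", "-f", name + ".socket"},
		{"systemctl", "stop", "-f", name + ".service"},
		{"systemctl", "disable", name + ".socket"},
		{"systemctl", "mask", name + ".service"},
	}
	for _, args := range steps {
		if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
			fmt.Printf("%v (ignored): %s", err, out) // unit may not exist
		}
	}
}

func main() {
	disableService("cri-docker")
	disableService("docker")
}
```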
	I1026 08:26:57.305150  192378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:26:57.323022  192378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 08:26:57.323078  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.339517  192378 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:26:57.339571  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.351281  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.361861  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
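
The sed runs above edit CRI-O's drop-in in place: pin the pause image, switch cgroup_manager to "systemd" (matching the driver detected on the host earlier), delete any stale conmon_cgroup line, and re-add conmon_cgroup = "pod" right after cgroup_manager. A sketch of the same rewrite in Go; the path and patterns come from the log, and the Go regexes mirror the sed expressions:

```go
// Sketch of the in-place drop-in edits above; deleting conmon_cgroup
// before re-adding it avoids duplicate keys, which is why the log
// orders the sed passes that way.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	s := string(data)
	// pin the pause image (sed 's|^.*pause_image = .*$|...|')
	s = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(s, `pause_image = "registry.k8s.io/pause:3.9"`)
	// drop stale conmon_cgroup lines (sed '/conmon_cgroup = .*/d')
	s = regexp.MustCompile(`(?m)^.*conmon_cgroup = .*$\n?`).ReplaceAllString(s, "")
	// set the cgroup manager and re-add conmon_cgroup right after it
	s = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(s, "cgroup_manager = \"systemd\"\nconmon_cgroup = \"pod\"")
	if err := os.WriteFile(path, []byte(s), 0o644); err != nil {
		panic(err)
	}
}
```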
	I1026 08:26:57.383739  192378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:26:57.402108  192378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:26:57.413391  192378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
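
The two kernel checks above verify that bridged traffic traverses iptables and that IPv4 forwarding is on; without ip_forward=1, pod-to-pod and NodePort traffic would be dropped by the kernel. A sketch of the forwarding toggle, equivalent to the echo into /proc in the log (root required, as the sudo implies):

```go
// Sketch of the ip_forward toggle above; equivalent to
// `echo 1 > /proc/sys/net/ipv4/ip_forward` and needs root.
package main

import "os"

func main() {
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}
```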
	I1026 08:26:57.423459  192378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:57.488996  192378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:26:58.996072  192378 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.507042374s)
	I1026 08:26:58.996103  192378 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:26:58.996157  192378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:26:59.000693  192378 start.go:540] Will wait 60s for crictl version
	I1026 08:26:59.000762  192378 ssh_runner.go:195] Run: which crictl
	I1026 08:26:59.005279  192378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 08:26:59.047366  192378 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 08:26:59.047462  192378 ssh_runner.go:195] Run: crio --version
	I1026 08:26:59.084072  192378 ssh_runner.go:195] Run: crio --version
	I1026 08:26:59.126693  192378 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
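
After the restart, the log shows two bounded waits: up to 60s for the crio.sock path to appear, then up to 60s for crictl to answer a version call, which is what actually proves the CRI endpoint is serving. A sketch of that two-stage wait; the one-second poll interval is an assumption:

```go
// Sketch of the two 60s waits above: stat the CRI-O socket until it
// exists, then shell out to crictl until it answers.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func waitFor(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	const sock = "/var/run/crio/crio.sock"
	if err := waitFor(60*time.Second, func() error {
		_, err := os.Stat(sock)
		return err
	}); err != nil {
		panic(err)
	}
	if err := waitFor(60*time.Second, func() error {
		return exec.Command("sudo", "/usr/bin/crictl", "version").Run()
	}); err != nil {
		panic(err)
	}
}
```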
	
	
	==> CRI-O <==
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.15540385Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.156302443Z" level=info msg="Conmon does support the --sync option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.156324712Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.15634332Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.157216654Z" level=info msg="Conmon does support the --sync option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.157237385Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.161619028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.161652248Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.162421719Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.162986006Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.16305232Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.169402883Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.225688677Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-qcszn Namespace:kube-system ID:af96428e49f357c491cdb5ba06ae603f3f368988f3756bf4df4544b02c993719 UID:c3e0eaff-6a88-440e-98b7-9230b2966e07 NetNS:/var/run/netns/adbde4ba-90c8-4abf-b098-d89ccbbbe432 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132740}] Aliases:map[]}"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.225891643Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-qcszn for CNI network kindnet (type=ptp)"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.22643727Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226469799Z" level=info msg="Starting seccomp notifier watcher"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226528891Z" level=info msg="Create NRI interface"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226666918Z" level=info msg="built-in NRI default validator is disabled"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226684014Z" level=info msg="runtime interface created"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226697494Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226706825Z" level=info msg="runtime interface starting up..."
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226713977Z" level=info msg="starting plugins..."
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226728877Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.227327011Z" level=info msg="No systemd watchdog enabled"
	Oct 26 08:26:54 pause-504806 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4142ecdf1fe30       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   17 seconds ago      Running             coredns                   0                   af96428e49f35       coredns-66bc5c9577-qcszn               kube-system
	65936c8bb6486       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   29 seconds ago      Running             kube-proxy                0                   1352f2732c108       kube-proxy-9d7fv                       kube-system
	bdbccec25f128       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   29 seconds ago      Running             kindnet-cni               0                   82ec617f17e3a       kindnet-cjpzm                          kube-system
	d6bff8cede979       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   40 seconds ago      Running             kube-apiserver            0                   ae5517cc6556e       kube-apiserver-pause-504806            kube-system
	e382b82319af9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   40 seconds ago      Running             kube-controller-manager   0                   54dec7c674d2a       kube-controller-manager-pause-504806   kube-system
	8d285175a1f06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   40 seconds ago      Running             kube-scheduler            0                   117c4d06d9e91       kube-scheduler-pause-504806            kube-system
	fcec7a37f3c1b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   40 seconds ago      Running             etcd                      0                   24e09776faef7       etcd-pause-504806                      kube-system
	
	
	==> coredns [4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42587 - 33191 "HINFO IN 4207833582110556143.1016658930209020495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018687599s
	
	
	==> describe nodes <==
	Name:               pause-504806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-504806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=pause-504806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_26_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-504806
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-504806
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                34f52b80-d738-4d86-b17a-bcff33c913fb
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qcszn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     30s
	  kube-system                 etcd-pause-504806                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         35s
	  kube-system                 kindnet-cjpzm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      30s
	  kube-system                 kube-apiserver-pause-504806             250m (3%)     0 (0%)      0 (0%)           0 (0%)         35s
	  kube-system                 kube-controller-manager-pause-504806    200m (2%)     0 (0%)      0 (0%)           0 (0%)         36s
	  kube-system                 kube-proxy-9d7fv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-scheduler-pause-504806             100m (1%)     0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  Starting                 41s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  41s (x8 over 41s)  kubelet          Node pause-504806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    41s (x8 over 41s)  kubelet          Node pause-504806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     41s (x8 over 41s)  kubelet          Node pause-504806 status is now: NodeHasSufficientPID
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s                kubelet          Node pause-504806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s                kubelet          Node pause-504806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s                kubelet          Node pause-504806 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node pause-504806 event: Registered Node pause-504806 in Controller
	  Normal  NodeReady                18s                kubelet          Node pause-504806 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5] <==
	{"level":"warn","ts":"2025-10-26T08:26:22.713529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.724309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.736339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.747889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.756681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.774884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.790328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.804675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.817423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.831228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.836279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.849552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.858540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.868753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.893440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.904741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.913712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.922436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.940691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.947275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.958584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.982299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:23.051822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56930","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:26:43.402113Z","caller":"traceutil/trace.go:172","msg":"trace[1296222753] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"110.889346ms","start":"2025-10-26T08:26:43.291203Z","end":"2025-10-26T08:26:43.402093Z","steps":["trace[1296222753] 'process raft request'  (duration: 110.749714ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:26:43.504013Z","caller":"traceutil/trace.go:172","msg":"trace[1699892003] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"208.551964ms","start":"2025-10-26T08:26:43.295441Z","end":"2025-10-26T08:26:43.503993Z","steps":["trace[1699892003] 'process raft request'  (duration: 208.390223ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:27:01 up  1:09,  0 user,  load average: 5.25, 2.26, 1.40
	Linux pause-504806 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0] <==
	I1026 08:26:32.532830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:26:32.533314       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 08:26:32.533467       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:26:32.533486       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:26:32.533506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:26:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:26:32.739784       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:26:32.739810       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:26:32.739822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:26:32.740115       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:26:33.103684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:26:33.103726       1 metrics.go:72] Registering metrics
	I1026 08:26:33.103800       1 controller.go:711] "Syncing nftables rules"
	I1026 08:26:42.740289       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:26:42.740379       1 main.go:301] handling current node
	I1026 08:26:52.743818       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:26:52.743853       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e] <==
	I1026 08:26:23.849494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:26:23.849540       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:26:23.849739       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:26:23.850566       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 08:26:23.856564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:26:23.858010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:23.865638       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:26:23.866007       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:26:24.752023       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:26:24.755605       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:26:24.755624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:26:25.284751       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:26:25.352015       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:26:25.460531       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:26:25.473134       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1026 08:26:25.474487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:26:25.482887       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:26:25.781386       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:26:26.510717       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:26:26.521831       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:26:26.530279       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:26:31.535954       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:31.549342       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:31.584897       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:26:31.881935       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec] <==
	I1026 08:26:30.733886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:26:30.734574       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:26:30.743002       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-504806" podCIDRs=["10.244.0.0/24"]
	I1026 08:26:30.743169       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:26:30.751150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:26:30.777360       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 08:26:30.778441       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:26:30.778462       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:26:30.778504       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:26:30.778532       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:26:30.778639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:26:30.778668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:26:30.778679       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:26:30.780924       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:26:30.785131       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 08:26:30.785159       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:26:30.785185       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:26:30.787481       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:26:30.787618       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:26:30.796196       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 08:26:30.811869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:26:30.827550       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:26:30.827569       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:26:30.827574       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:26:45.968789       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b] <==
	I1026 08:26:32.330923       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:26:32.398303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:26:32.499188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:26:32.499225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 08:26:32.499334       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:26:32.520561       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:26:32.520623       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:26:32.527090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:26:32.527614       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:26:32.527667       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:26:32.529300       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:26:32.529332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:26:32.529347       1 config.go:200] "Starting service config controller"
	I1026 08:26:32.529353       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:26:32.529399       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:26:32.529409       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:26:32.529430       1 config.go:309] "Starting node config controller"
	I1026 08:26:32.529443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:26:32.529450       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:26:32.630268       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:26:32.630308       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:26:32.630308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67] <==
	I1026 08:26:24.360029       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:26:24.362042       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:26:24.362077       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:26:24.362453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:26:24.362526       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 08:26:24.364419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 08:26:24.364538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:26:24.365620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:26:24.368276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:26:24.368526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:26:24.368553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:26:24.368642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:26:24.368653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:26:24.368702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:26:24.368717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:26:24.368736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:26:24.368798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:26:24.368815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:26:24.368820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:26:24.368833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:26:24.368908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:26:24.368926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:26:24.368989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:26:24.368933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1026 08:26:25.962193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.389173    1290 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.389186    1290 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464636    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464713    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464732    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.490011    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.618936    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.868400    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: W1026 08:26:53.264921    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465181    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465237    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465278    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:53 pause-504806 kubelet[1290]: W1026 08:26:53.851111    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388237    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388372    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388392    1290 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388404    1290 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466418    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466512    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466544    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:58 pause-504806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:26:58 pause-504806 kubelet[1290]: I1026 08:26:58.326139    1290 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:26:58 pause-504806 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:26:58 pause-504806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:26:58 pause-504806 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-504806 -n pause-504806
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-504806 -n pause-504806: exit status 2 (318.327507ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
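
The harness flags this non-zero exit as possibly expected ("may be ok") because minikube status signals component state through its exit code, so stdout can still report the API server as Running while the command exits 2. A sketch of the same probe, surfacing output and exit code separately:

```go
// Sketch of the status probe the harness runs above; the exit code is
// reported alongside stdout rather than treated as a hard failure.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.APIServer}}", "-p", "pause-504806", "-n", "pause-504806")
	out, err := cmd.CombinedOutput()
	fmt.Printf("output: %s", out)
	if ee, ok := err.(*exec.ExitError); ok {
		fmt.Println("exit code:", ee.ExitCode()) // may still be ok for a paused cluster
	}
}
```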
helpers_test.go:269: (dbg) Run:  kubectl --context pause-504806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-504806
helpers_test.go:243: (dbg) docker inspect pause-504806:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872",
	        "Created": "2025-10-26T08:26:08.165801388Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 184240,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:26:08.214494293Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/hostname",
	        "HostsPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/hosts",
	        "LogPath": "/var/lib/docker/containers/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872/a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872-json.log",
	        "Name": "/pause-504806",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-504806:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-504806",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a079a030fad673829f32a183921d7ee9d33fe9a7c35259cd8bd105dce82e0872",
	                "LowerDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4b62deaefe80bbabaf2de1f39b63470a9089044928641b1fdbd228bfd9322a73/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-504806",
	                "Source": "/var/lib/docker/volumes/pause-504806/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-504806",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-504806",
	                "name.minikube.sigs.k8s.io": "pause-504806",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd8666daeb29f8ebfdf6b45db3377f410ad17178e84af90d0d6c3c0a2b8f4dfa",
	            "SandboxKey": "/var/run/docker/netns/cd8666daeb29",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32988"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32989"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32992"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32990"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32991"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-504806": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "46:88:96:12:cc:0c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "9c4bbf29c6775e3fe40fd806bd3d8f14bb330c9950268cd1c9c69a7fba2c3c0f",
	                    "EndpointID": "880d0592db81cad23aac9c4ad781d4cd70f38072eb7f66cf2d1eab8a48ab7aa2",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-504806",
	                        "a079a030fad6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
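Individual fields of that JSON can be pulled directly with the same Go templates minikube itself uses elsewhere in these logs, which is handy when reading a post-mortem by hand. A sketch against the container above:

  # run state and the Paused flag the pause test cares about
  docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-504806
  # host port published for the container's SSH port 22 (32988 in the JSON above)
  docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-504806
  # the profile network's IP (192.168.103.2 above)
  docker inspect -f '{{(index .NetworkSettings.Networks "pause-504806").IPAddress}}' pause-504806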
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-504806 -n pause-504806
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-504806 -n pause-504806: exit status 2 (320.232761ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-504806 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                   ARGS                                                                   │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p test-preload-244810                                                                                                                   │ test-preload-244810         │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ start   │ -p scheduled-stop-422857 --memory=3072 --driver=docker  --container-runtime=crio                                                         │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 5m                                                                                                   │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --cancel-scheduled                                                                                              │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:24 UTC │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │                     │
	│ stop    │ -p scheduled-stop-422857 --schedule 15s                                                                                                  │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:24 UTC │ 26 Oct 25 08:25 UTC │
	│ delete  │ -p scheduled-stop-422857                                                                                                                 │ scheduled-stop-422857       │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:25 UTC │
	│ start   │ -p insufficient-storage-232115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio                         │ insufficient-storage-232115 │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │                     │
	│ delete  │ -p insufficient-storage-232115                                                                                                           │ insufficient-storage-232115 │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:25 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                   │ cert-expiration-535689      │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p offline-crio-486469 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio                        │ offline-crio-486469         │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p force-systemd-env-519045 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                               │ force-systemd-env-519045    │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p pause-504806 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:25 UTC │ 26 Oct 25 08:26 UTC │
	│ delete  │ -p force-systemd-env-519045                                                                                                              │ force-systemd-env-519045    │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p missing-upgrade-300975 --memory=3072 --driver=docker  --container-runtime=crio                                                        │ missing-upgrade-300975      │ jenkins │ v1.32.0 │ 26 Oct 25 08:26 UTC │                     │
	│ delete  │ -p offline-crio-486469                                                                                                                   │ offline-crio-486469         │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p pause-504806 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                         │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │ 26 Oct 25 08:26 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio │ kubernetes-upgrade-462840   │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │                     │
	│ pause   │ -p pause-504806 --alsologtostderr -v=5                                                                                                   │ pause-504806                │ jenkins │ v1.37.0 │ 26 Oct 25 08:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:26:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
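Every line below carries that klog header, so severity and timestamp are easy to filter on. A sketch for pulling only warnings, errors, and fatals out of a saved copy of these logs (minikube.log is a placeholder name; the character class tolerates the tab indentation used in this report):

  grep -E '^[[:space:]]*[WEF][0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2}' minikube.log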
	I1026 08:26:50.890420  194299 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:26:50.890799  194299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:50.890811  194299 out.go:374] Setting ErrFile to fd 2...
	I1026 08:26:50.890817  194299 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:26:50.891210  194299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:26:50.891780  194299 out.go:368] Setting JSON to false
	I1026 08:26:50.892920  194299 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4162,"bootTime":1761463049,"procs":288,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:26:50.892975  194299 start.go:141] virtualization: kvm guest
	I1026 08:26:50.897687  194299 out.go:179] * [kubernetes-upgrade-462840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:26:50.899432  194299 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:26:50.899456  194299 notify.go:220] Checking for updates...
	I1026 08:26:50.901852  194299 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:26:50.903725  194299 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:26:50.905082  194299 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:26:50.906624  194299 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:26:50.907727  194299 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:26:50.909671  194299 config.go:182] Loaded profile config "cert-expiration-535689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:50.909844  194299 config.go:182] Loaded profile config "missing-upgrade-300975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 08:26:50.910022  194299 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:50.910157  194299 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:26:50.938628  194299 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:26:50.938793  194299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:26:51.008195  194299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:false NGoroutines:67 SystemTime:2025-10-26 08:26:50.99745331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:26:51.008355  194299 docker.go:318] overlay module found
	I1026 08:26:51.010592  194299 out.go:179] * Using the docker driver based on user configuration
	I1026 08:26:51.011783  194299 start.go:305] selected driver: docker
	I1026 08:26:51.011800  194299 start.go:925] validating driver "docker" against <nil>
	I1026 08:26:51.011815  194299 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:26:51.012459  194299 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:26:51.092120  194299 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:61 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:26:51.080031666 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:26:51.092318  194299 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:26:51.092567  194299 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 08:26:51.094739  194299 out.go:179] * Using Docker driver with root privileges
	I1026 08:26:51.096102  194299 cni.go:84] Creating CNI manager for ""
	I1026 08:26:51.096178  194299 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:26:51.096193  194299 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:26:51.096321  194299 start.go:349] cluster config:
	{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:26:51.097824  194299 out.go:179] * Starting "kubernetes-upgrade-462840" primary control-plane node in "kubernetes-upgrade-462840" cluster
	I1026 08:26:51.098860  194299 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:26:51.099967  194299 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:26:51.101000  194299 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:26:51.101036  194299 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:26:51.101036  194299 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:26:51.101045  194299 cache.go:58] Caching tarball of preloaded images
	I1026 08:26:51.101220  194299 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:26:51.101232  194299 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 08:26:51.101364  194299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/config.json ...
	I1026 08:26:51.101396  194299 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/config.json: {Name:mk8ee85c8e830b9f72a8f6866b6746efc897cf1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:51.124820  194299 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:26:51.124844  194299 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:26:51.124864  194299 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:26:51.124902  194299 start.go:360] acquireMachinesLock for kubernetes-upgrade-462840: {Name:mkd80f24e37729d329fe777d33e3092e56a7a873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:26:51.125009  194299 start.go:364] duration metric: took 85.018µs to acquireMachinesLock for "kubernetes-upgrade-462840"
	I1026 08:26:51.125039  194299 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:26:51.125121  194299 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:26:50.410748  192378 out.go:204] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:26:50.411089  192378 start.go:159] libmachine.API.Create for "missing-upgrade-300975" (driver="docker")
	I1026 08:26:50.411122  192378 client.go:168] LocalClient.Create starting
	I1026 08:26:50.411213  192378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:26:50.411264  192378 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:50.411281  192378 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:50.411363  192378 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:26:50.411385  192378 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:50.411397  192378 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:50.411831  192378 cli_runner.go:164] Run: docker network inspect missing-upgrade-300975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:26:50.431808  192378 cli_runner.go:211] docker network inspect missing-upgrade-300975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:26:50.431889  192378 network_create.go:281] running [docker network inspect missing-upgrade-300975] to gather additional debugging logs...
	I1026 08:26:50.431904  192378 cli_runner.go:164] Run: docker network inspect missing-upgrade-300975
	W1026 08:26:50.450574  192378 cli_runner.go:211] docker network inspect missing-upgrade-300975 returned with exit code 1
	I1026 08:26:50.450601  192378 network_create.go:284] error running [docker network inspect missing-upgrade-300975]: docker network inspect missing-upgrade-300975: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network missing-upgrade-300975 not found
	I1026 08:26:50.450618  192378 network_create.go:286] output of [docker network inspect missing-upgrade-300975]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network missing-upgrade-300975 not found
	
	** /stderr **
	I1026 08:26:50.450745  192378 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:50.469214  192378 network.go:214] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:26:50.469727  192378 network.go:214] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:26:50.470202  192378 network.go:214] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:26:50.470665  192378 network.go:214] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c512b29df443 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:67:8a:60:ac:da} reservation:<nil>}
	I1026 08:26:50.471131  192378 network.go:214] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-468fe1679ab5 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:1a:7c:9f:9c:66:a4} reservation:<nil>}
	I1026 08:26:50.471753  192378 network.go:209] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0025eb280}
	I1026 08:26:50.471769  192378 network_create.go:124] attempt to create docker network missing-upgrade-300975 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1026 08:26:50.471817  192378 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=missing-upgrade-300975 missing-upgrade-300975
	I1026 08:26:50.534959  192378 network_create.go:108] docker network missing-upgrade-300975 192.168.94.0/24 created
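Every network minikube creates carries the labels passed on the docker network create command above, which makes stale test networks easy to find and clean up. A sketch:

  # list minikube-created networks
  docker network ls --filter label=created_by.minikube.sigs.k8s.io=true
  # confirm the subnet and gateway chosen by the scan above
  docker network inspect missing-upgrade-300975 \
    -f '{{range .IPAM.Config}}{{.Subnet}} via {{.Gateway}}{{end}}'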
	I1026 08:26:50.534993  192378 kic.go:121] calculated static IP "192.168.94.2" for the "missing-upgrade-300975" container
	I1026 08:26:50.535077  192378 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:26:50.553700  192378 cli_runner.go:164] Run: docker volume create missing-upgrade-300975 --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:26:50.572784  192378 oci.go:103] Successfully created a docker volume missing-upgrade-300975
	I1026 08:26:50.572861  192378 cli_runner.go:164] Run: docker run --rm --name missing-upgrade-300975-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --entrypoint /usr/bin/test -v missing-upgrade-300975:/var gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -d /var/lib
	I1026 08:26:50.993743  192378 oci.go:107] Successfully prepared a docker volume missing-upgrade-300975
	I1026 08:26:50.993773  192378 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 08:26:50.993800  192378 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:26:50.993878  192378 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-300975:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir
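The two docker run commands above are how minikube seeds a new machine: a throwaway sidecar first verifies the named volume mounts cleanly at /var, then a second container untars the preload tarball into it, so the kic container created next starts with /var already populated. A sketch for checking where that volume lives on the host:

  docker volume inspect missing-upgrade-300975 -f '{{.Mountpoint}}'
  # typically /var/lib/docker/volumes/missing-upgrade-300975/_data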
	I1026 08:26:48.220292  193447 out.go:252] * Updating the running docker "pause-504806" container ...
	I1026 08:26:48.220338  193447 machine.go:93] provisionDockerMachine start ...
	I1026 08:26:48.220454  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.243455  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.243783  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.243798  193447 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:26:48.416625  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-504806
	
	I1026 08:26:48.416663  193447 ubuntu.go:182] provisioning hostname "pause-504806"
	I1026 08:26:48.416725  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.437816  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.438079  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.438100  193447 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-504806 && echo "pause-504806" | sudo tee /etc/hostname
	I1026 08:26:48.663515  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-504806
	
	I1026 08:26:48.663603  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:48.684983  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:48.685394  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:48.685423  193447 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-504806' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-504806/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-504806' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:26:48.827979  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:26:48.828011  193447 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:26:48.828065  193447 ubuntu.go:190] setting up certificates
	I1026 08:26:48.828076  193447 provision.go:84] configureAuth start
	I1026 08:26:48.828150  193447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-504806
	I1026 08:26:48.846566  193447 provision.go:143] copyHostCerts
	I1026 08:26:48.846643  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:26:48.846661  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:26:48.903322  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:26:48.903503  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:26:48.903518  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:26:48.903564  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:26:48.903663  193447 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:26:48.903674  193447 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:26:48.903710  193447 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:26:48.903794  193447 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.pause-504806 san=[127.0.0.1 192.168.103.2 localhost minikube pause-504806]
	I1026 08:26:49.089494  193447 provision.go:177] copyRemoteCerts
	I1026 08:26:49.089586  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:26:49.089635  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:49.108415  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:49.209060  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:26:49.227804  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1026 08:26:49.245263  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:26:49.263401  193447 provision.go:87] duration metric: took 435.308206ms to configureAuth
	I1026 08:26:49.263432  193447 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:26:49.263640  193447 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:49.263730  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:49.281508  193447 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:49.281727  193447 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32988 <nil> <nil>}
	I1026 08:26:49.281741  193447 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:26:50.371946  193447 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:26:50.371974  193447 machine.go:96] duration metric: took 2.151627832s to provisionDockerMachine
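The printf-to-tee command a few lines above drops an --insecure-registry flag into /etc/sysconfig/crio.minikube and restarts CRI-O; on the kicbase image the crio unit is assumed to source that file, so the flag should appear on the running process. A sketch to verify:

  sudo cat /etc/sysconfig/crio.minikube
  ps -o args= -C crio   # --insecure-registry 10.96.0.0/12 should show up if the unit sources the file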
	I1026 08:26:50.371984  193447 start.go:293] postStartSetup for "pause-504806" (driver="docker")
	I1026 08:26:50.371994  193447 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:26:50.372063  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:26:50.372100  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.392859  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.501622  193447 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:26:50.505453  193447 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:26:50.505486  193447 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:26:50.505498  193447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:26:50.505548  193447 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:26:50.505659  193447 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:26:50.505784  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:26:50.514022  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:50.532544  193447 start.go:296] duration metric: took 160.544322ms for postStartSetup
	I1026 08:26:50.532621  193447 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:26:50.532695  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.552897  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.654282  193447 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:26:50.659643  193447 fix.go:56] duration metric: took 2.509300124s for fixHost
	I1026 08:26:50.659669  193447 start.go:83] releasing machines lock for "pause-504806", held for 2.509347292s
	I1026 08:26:50.659752  193447 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-504806
	I1026 08:26:50.679464  193447 ssh_runner.go:195] Run: cat /version.json
	I1026 08:26:50.679543  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.679556  193447 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:26:50.679616  193447 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-504806
	I1026 08:26:50.702009  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.703090  193447 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32988 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/pause-504806/id_rsa Username:docker}
	I1026 08:26:50.870618  193447 ssh_runner.go:195] Run: systemctl --version
	I1026 08:26:50.879330  193447 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:26:50.923593  193447 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:26:50.929136  193447 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:26:50.929224  193447 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:50.939331  193447 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:26:50.939355  193447 start.go:495] detecting cgroup driver to use...
	I1026 08:26:50.939396  193447 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:26:50.939438  193447 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:26:50.961347  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:26:50.979185  193447 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:26:50.979244  193447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:26:50.998634  193447 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:26:51.014525  193447 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:26:51.154629  193447 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:26:51.298065  193447 docker.go:234] disabling docker service ...
	I1026 08:26:51.298154  193447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:26:51.315991  193447 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:26:51.332519  193447 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:26:51.453319  193447 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:26:51.592531  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:26:51.606893  193447 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:26:51.671963  193447 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
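The /etc/crictl.yaml write just above points crictl at CRI-O's socket; without it, crictl probes a list of default endpoints and prints warnings. A quick sketch to confirm the file took effect:

  sudo cat /etc/crictl.yaml
  sudo crictl info | head -n 5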
	I1026 08:26:51.672065  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.697777  193447 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:26:51.697842  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.719151  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.729750  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.740587  193447 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:26:51.750497  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.760750  193447 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.770658  193447 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:51.779969  193447 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:26:51.788046  193447 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:26:51.796911  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:51.956003  193447 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:26:54.231337  193447 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.275300231s)
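The run above configures cri-o entirely through in-place sed edits of /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup, default_sysctls) before the daemon-reload and restart. As a rough illustration only — minikube applies these edits remotely through ssh_runner, not locally — the two main substitutions could be reproduced in Go like this:

// Sketch: apply the same pause_image / cgroup_manager rewrites the log
// shows minikube performing with sed, but locally via Go's regexp.
// The path and values are taken from the log above; this is an
// illustration, not minikube's actual implementation.
package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	const conf = "02-crio.conf" // stand-in for /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(conf)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Equivalent to: sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	// Equivalent to: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|'
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(conf, data, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}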
	I1026 08:26:54.231363  193447 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:26:54.231415  193447 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:26:54.236333  193447 start.go:563] Will wait 60s for crictl version
	I1026 08:26:54.236385  193447 ssh_runner.go:195] Run: which crictl
	I1026 08:26:54.241952  193447 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:26:54.270818  193447 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:26:54.270920  193447 ssh_runner.go:195] Run: crio --version
	I1026 08:26:54.308447  193447 ssh_runner.go:195] Run: crio --version
	I1026 08:26:54.344427  193447 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:26:54.345908  193447 cli_runner.go:164] Run: docker network inspect pause-504806 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:54.366880  193447 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 08:26:54.371897  193447 kubeadm.go:883] updating cluster {Name:pause-504806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:26:54.372150  193447 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:26:54.372222  193447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:54.409618  193447 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:26:54.409715  193447 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:26:54.409790  193447 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:54.444890  193447 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:26:54.444919  193447 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:26:54.444927  193447 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 08:26:54.445059  193447 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-504806 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:26:54.445157  193447 ssh_runner.go:195] Run: crio config
	I1026 08:26:54.500753  193447 cni.go:84] Creating CNI manager for ""
	I1026 08:26:54.500780  193447 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:26:54.500799  193447 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:26:54.500827  193447 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-504806 NodeName:pause-504806 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:26:54.501000  193447 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-504806"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
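The kubeadm config rendered above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines later. For reference, a minimal sketch of sanity-checking the KubeletConfiguration document it contains, assuming gopkg.in/yaml.v3 is on the module path (the struct and field selection here are illustrative, not minikube code):

// Sketch: decode the handful of KubeletConfiguration fields the log cares
// about (cgroupDriver, containerRuntimeEndpoint, failSwapOn). Assumes
// gopkg.in/yaml.v3 is available.
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

type kubeletConfig struct {
	CgroupDriver             string `yaml:"cgroupDriver"`
	ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	FailSwapOn               bool   `yaml:"failSwapOn"`
}

func main() {
	doc := `
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
failSwapOn: false
`
	var kc kubeletConfig
	if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
		panic(err)
	}
	fmt.Printf("driver=%s endpoint=%s failSwapOn=%v\n",
		kc.CgroupDriver, kc.ContainerRuntimeEndpoint, kc.FailSwapOn)
}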
	I1026 08:26:54.501075  193447 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:26:54.512380  193447 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:26:54.512451  193447 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:26:54.521980  193447 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1026 08:26:54.539414  193447 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:26:54.555359  193447 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1026 08:26:54.569697  193447 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:26:54.573816  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:54.730345  193447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:26:54.749593  193447 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806 for IP: 192.168.103.2
	I1026 08:26:54.749618  193447 certs.go:195] generating shared ca certs ...
	I1026 08:26:54.749638  193447 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:54.749784  193447 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:26:54.749839  193447 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:26:54.749853  193447 certs.go:257] generating profile certs ...
	I1026 08:26:54.749960  193447 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key
	I1026 08:26:54.750045  193447 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.key.50908169
	I1026 08:26:54.750101  193447 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.key
	I1026 08:26:54.750278  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:26:54.750332  193447 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:26:54.750348  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:26:54.750384  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:26:54.750416  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:26:54.750451  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:26:54.750509  193447 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:54.751143  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:26:54.774061  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:26:54.804440  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:26:54.835549  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:26:54.859481  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1026 08:26:54.879872  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:26:54.901572  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:26:54.935682  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:26:54.958197  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:26:54.977132  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:26:54.997988  193447 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:26:55.045773  193447 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:26:55.058926  193447 ssh_runner.go:195] Run: openssl version
	I1026 08:26:55.065508  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:26:55.074284  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.079158  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.079216  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:26:55.148923  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:26:55.158019  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:26:55.167804  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.172195  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.172300  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:26:55.217728  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:26:55.228622  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:26:55.238769  193447 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.243541  193447 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.243604  193447 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:26:55.282802  193447 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
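The `ln -fs` targets above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's c_rehash convention: the link name is the subject-name hash printed by `openssl x509 -hash -noout`, plus a ".0" suffix. A sketch of creating one such link, shelling out to openssl the same way the log does (paths are taken from the log; this is an illustration, not minikube's implementation, and writing under /etc/ssl/certs needs root, hence the sudo in the log):

// Sketch: compute the OpenSSL subject-name hash of a CA PEM and create
// the /etc/ssl/certs/<hash>.0 symlink, mirroring the commands logged above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// ln -fs equivalent: replace any existing link at the hashed name.
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pemPath)
}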
	I1026 08:26:55.291510  193447 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:26:55.295335  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:26:55.333452  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:26:55.372296  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:26:55.415092  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:26:55.450544  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:26:55.489397  193447 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
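Each `-checkend 86400` probe above asks openssl whether the certificate expires within the next 24 hours. The same check can be done without shelling out, using Go's crypto/x509 (the file name is a stand-in for the cert paths in the log):

// Sketch: Go equivalent of `openssl x509 -checkend 86400` — report whether
// a PEM certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	raw, err := os.ReadFile("apiserver-kubelet-client.crt") // stand-in path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 86400 seconds")
		os.Exit(1)
	}
	fmt.Println("certificate is good for at least another 24 hours")
}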
	I1026 08:26:55.525516  193447 kubeadm.go:400] StartCluster: {Name:pause-504806 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-504806 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:26:55.525654  193447 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:26:55.525723  193447 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:26:55.560924  193447 cri.go:89] found id: "4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60"
	I1026 08:26:55.560948  193447 cri.go:89] found id: "65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b"
	I1026 08:26:55.560953  193447 cri.go:89] found id: "bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0"
	I1026 08:26:55.560958  193447 cri.go:89] found id: "d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e"
	I1026 08:26:55.560963  193447 cri.go:89] found id: "e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec"
	I1026 08:26:55.560967  193447 cri.go:89] found id: "8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67"
	I1026 08:26:55.560971  193447 cri.go:89] found id: "fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5"
	I1026 08:26:55.560975  193447 cri.go:89] found id: ""
	I1026 08:26:55.561021  193447 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:26:55.573145  193447 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:26:55Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:26:55.573227  193447 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:26:55.581811  193447 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:26:55.581831  193447 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:26:55.581879  193447 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:26:55.590131  193447 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:26:55.590724  193447 kubeconfig.go:125] found "pause-504806" server: "https://192.168.103.2:8443"
	I1026 08:26:55.591473  193447 kapi.go:59] client config for pause-504806: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:26:55.591835  193447 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:26:55.591855  193447 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:26:55.591860  193447 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:26:55.591866  193447 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:26:55.591874  193447 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:26:55.592181  193447 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:26:55.604495  193447 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 08:26:55.604533  193447 kubeadm.go:601] duration metric: took 22.695286ms to restartPrimaryControlPlane
	I1026 08:26:55.604542  193447 kubeadm.go:402] duration metric: took 79.039298ms to StartCluster
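The "does not require reconfiguration" decision above comes down to the `diff -u` of /var/tmp/minikube/kubeadm.yaml against kubeadm.yaml.new a few lines earlier. A minimal sketch of that comparison (paths from the log; minikube's actual restart logic weighs additional signals):

// Sketch: decide restart-vs-reconfigure by byte-comparing the deployed
// kubeadm config with the freshly rendered one, as the logged diff does.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	cur, err1 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml")
	want, err2 := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err1 != nil || err2 != nil {
		fmt.Println("missing config, full reconfiguration needed")
		return
	}
	if bytes.Equal(cur, want) {
		fmt.Println("running cluster does not require reconfiguration")
	} else {
		fmt.Println("kubeadm config changed, restarting control plane")
	}
}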
	I1026 08:26:55.604561  193447 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:55.604633  193447 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:26:55.605721  193447 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:55.605977  193447 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:26:55.606045  193447 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:26:55.606329  193447 config.go:182] Loaded profile config "pause-504806": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:26:55.697507  193447 out.go:179] * Verifying Kubernetes components...
	I1026 08:26:55.697518  193447 out.go:179] * Enabled addons: 
	I1026 08:26:51.127379  194299 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:26:51.127635  194299 start.go:159] libmachine.API.Create for "kubernetes-upgrade-462840" (driver="docker")
	I1026 08:26:51.127671  194299 client.go:168] LocalClient.Create starting
	I1026 08:26:51.127749  194299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:26:51.127792  194299 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:51.127818  194299 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:51.127896  194299 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:26:51.127923  194299 main.go:141] libmachine: Decoding PEM data...
	I1026 08:26:51.127937  194299 main.go:141] libmachine: Parsing certificate...
	I1026 08:26:51.128392  194299 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-462840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:26:51.148647  194299 cli_runner.go:211] docker network inspect kubernetes-upgrade-462840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:26:51.148739  194299 network_create.go:284] running [docker network inspect kubernetes-upgrade-462840] to gather additional debugging logs...
	I1026 08:26:51.148764  194299 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-462840
	W1026 08:26:51.173410  194299 cli_runner.go:211] docker network inspect kubernetes-upgrade-462840 returned with exit code 1
	I1026 08:26:51.173442  194299 network_create.go:287] error running [docker network inspect kubernetes-upgrade-462840]: docker network inspect kubernetes-upgrade-462840: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kubernetes-upgrade-462840 not found
	I1026 08:26:51.173458  194299 network_create.go:289] output of [docker network inspect kubernetes-upgrade-462840]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kubernetes-upgrade-462840 not found
	
	** /stderr **
	I1026 08:26:51.173610  194299 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:51.196117  194299 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:26:51.196858  194299 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:26:51.197641  194299 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:26:51.198471  194299 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c512b29df443 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:0a:67:8a:60:ac:da} reservation:<nil>}
	I1026 08:26:51.199649  194299 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ff8fd0}
	I1026 08:26:51.199739  194299 network_create.go:124] attempt to create docker network kubernetes-upgrade-462840 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1026 08:26:51.199820  194299 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 kubernetes-upgrade-462840
	I1026 08:26:51.269670  194299 network_create.go:108] docker network kubernetes-upgrade-462840 192.168.85.0/24 created
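The subnet hunt above tries 192.168.49.0/24, .58, .67 and .76 before settling on .85 — candidates nine apart, skipping any range already bound to a local bridge. A simplified stand-in for that search (minikube's real version lives in network.go and also tracks reservations):

// Sketch: walk 192.168.x.0/24 candidates in steps of 9, as the log shows,
// and take the first subnet no local interface already sits in.
package main

import (
	"fmt"
	"net"
)

// taken reports whether any local interface address falls inside cidr.
func taken(cidr string) bool {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return true
	}
	addrs, err := net.InterfaceAddrs()
	if err != nil {
		return true
	}
	for _, a := range addrs {
		if ip, _, err := net.ParseCIDR(a.String()); err == nil && ipnet.Contains(ip) {
			return true
		}
	}
	return false
}

func main() {
	for third := 49; third <= 247; third += 9 { // 49, 58, 67, ... as in the log
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if !taken(cidr) {
			fmt.Println("using free private subnet", cidr)
			return
		}
		fmt.Println("skipping subnet", cidr, "that is taken")
	}
}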
	I1026 08:26:51.269724  194299 kic.go:121] calculated static IP "192.168.85.2" for the "kubernetes-upgrade-462840" container
	I1026 08:26:51.269807  194299 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:26:51.291278  194299 cli_runner.go:164] Run: docker volume create kubernetes-upgrade-462840 --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:26:51.312546  194299 oci.go:103] Successfully created a docker volume kubernetes-upgrade-462840
	I1026 08:26:51.312633  194299 cli_runner.go:164] Run: docker run --rm --name kubernetes-upgrade-462840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --entrypoint /usr/bin/test -v kubernetes-upgrade-462840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:26:54.294012  194299 cli_runner.go:217] Completed: docker run --rm --name kubernetes-upgrade-462840-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --entrypoint /usr/bin/test -v kubernetes-upgrade-462840:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib: (2.981333348s)
	I1026 08:26:54.294043  194299 oci.go:107] Successfully prepared a docker volume kubernetes-upgrade-462840
	I1026 08:26:54.294080  194299 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:26:54.294102  194299 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:26:54.294185  194299 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-462840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:26:54.065050  192378 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.3-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v missing-upgrade-300975:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 -I lz4 -xf /preloaded.tar -C /extractDir: (3.071097923s)
	I1026 08:26:54.065078  192378 kic.go:203] duration metric: took 3.071278 seconds to extract preloaded images to volume
	W1026 08:26:54.065164  192378 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:26:54.065188  192378 oci.go:243] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:26:54.065224  192378 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:26:54.134000  192378 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname missing-upgrade-300975 --name missing-upgrade-300975 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=missing-upgrade-300975 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=missing-upgrade-300975 --network missing-upgrade-300975 --ip 192.168.94.2 --volume missing-upgrade-300975:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0
	I1026 08:26:54.481544  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Running}}
	I1026 08:26:54.503777  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.527782  192378 cli_runner.go:164] Run: docker exec missing-upgrade-300975 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:26:54.579590  192378 oci.go:144] the created container "missing-upgrade-300975" has a running status.
	I1026 08:26:54.579615  192378 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa...
	I1026 08:26:54.755578  192378 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:26:54.795384  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.825957  192378 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:26:54.825979  192378 kic_runner.go:114] Args: [docker exec --privileged missing-upgrade-300975 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:26:54.885021  192378 cli_runner.go:164] Run: docker container inspect missing-upgrade-300975 --format={{.State.Status}}
	I1026 08:26:54.904044  192378 machine.go:88] provisioning docker machine ...
	I1026 08:26:54.904087  192378 ubuntu.go:169] provisioning hostname "missing-upgrade-300975"
	I1026 08:26:54.904157  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:54.924707  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:54.934880  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:54.934903  192378 main.go:141] libmachine: About to run SSH command:
	sudo hostname missing-upgrade-300975 && echo "missing-upgrade-300975" | sudo tee /etc/hostname
	I1026 08:26:55.103426  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: missing-upgrade-300975
	
	I1026 08:26:55.103523  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.121612  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:55.121943  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:55.121956  192378 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smissing-upgrade-300975' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 missing-upgrade-300975/g' /etc/hosts;
				else 
					echo '127.0.1.1 missing-upgrade-300975' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:26:55.240022  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:26:55.240044  192378 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:26:55.240068  192378 ubuntu.go:177] setting up certificates
	I1026 08:26:55.240081  192378 provision.go:83] configureAuth start
	I1026 08:26:55.240159  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:55.260591  192378 provision.go:138] copyHostCerts
	I1026 08:26:55.260644  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:26:55.260651  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:26:55.260703  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:26:55.260779  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:26:55.260782  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:26:55.260806  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:26:55.260857  192378 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:26:55.260860  192378 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:26:55.260881  192378 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:26:55.260933  192378 provision.go:112] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.missing-upgrade-300975 san=[192.168.94.2 127.0.0.1 localhost 127.0.0.1 minikube missing-upgrade-300975]
	I1026 08:26:55.523936  192378 provision.go:172] copyRemoteCerts
	I1026 08:26:55.524010  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:26:55.524056  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.545280  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:55.635371  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:26:55.727529  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 08:26:55.769263  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:26:55.798216  192378 provision.go:86] duration metric: configureAuth took 558.119947ms
	I1026 08:26:55.798242  192378 ubuntu.go:193] setting minikube options for container-runtime
	I1026 08:26:55.798472  192378 config.go:182] Loaded profile config "missing-upgrade-300975": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1026 08:26:55.798609  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:55.818927  192378 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:55.819426  192378 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x808a40] 0x80b720 <nil>  [] 0s} 127.0.0.1 32993 <nil> <nil>}
	I1026 08:26:55.819445  192378 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:26:56.050443  192378 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:26:56.050465  192378 machine.go:91] provisioned docker machine in 1.146396664s
	I1026 08:26:56.050474  192378 client.go:171] LocalClient.Create took 5.639347865s
	I1026 08:26:56.050494  192378 start.go:167] duration metric: libmachine.API.Create for "missing-upgrade-300975" took 5.63940715s
	I1026 08:26:56.050503  192378 start.go:300] post-start starting for "missing-upgrade-300975" (driver="docker")
	I1026 08:26:56.050515  192378 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:26:56.050581  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:26:56.050618  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.071745  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.163189  192378 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:26:56.167015  192378 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:26:56.167069  192378 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1026 08:26:56.167080  192378 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1026 08:26:56.167086  192378 info.go:137] Remote host: Ubuntu 22.04.3 LTS
	I1026 08:26:56.167097  192378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:26:56.167151  192378 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:26:56.167224  192378 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:26:56.167348  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:26:56.177268  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:56.210005  192378 start.go:303] post-start completed in 159.484732ms
	I1026 08:26:56.210442  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:56.229581  192378 profile.go:148] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/config.json ...
	I1026 08:26:56.229841  192378 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:26:56.229883  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.248878  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.332728  192378 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:26:56.337035  192378 start.go:128] duration metric: createHost completed in 5.928199636s
	I1026 08:26:56.337049  192378 start.go:83] releasing machines lock for "missing-upgrade-300975", held for 5.92835033s
	I1026 08:26:56.337124  192378 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" missing-upgrade-300975
	I1026 08:26:56.357644  192378 ssh_runner.go:195] Run: cat /version.json
	I1026 08:26:56.357712  192378 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:26:56.357774  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.357798  192378 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" missing-upgrade-300975
	I1026 08:26:56.377616  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.378589  192378 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32993 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/missing-upgrade-300975/id_rsa Username:docker}
	I1026 08:26:56.555694  192378 ssh_runner.go:195] Run: systemctl --version
	I1026 08:26:56.560499  192378 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:26:55.699736  193447 addons.go:514] duration metric: took 93.697611ms for enable addons: enabled=[]
	I1026 08:26:55.699800  193447 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:55.849605  193447 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:26:55.865153  193447 node_ready.go:35] waiting up to 6m0s for node "pause-504806" to be "Ready" ...
	I1026 08:26:55.874496  193447 node_ready.go:49] node "pause-504806" is "Ready"
	I1026 08:26:55.874526  193447 node_ready.go:38] duration metric: took 9.338002ms for node "pause-504806" to be "Ready" ...
	I1026 08:26:55.874543  193447 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:26:55.874593  193447 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:26:55.888516  193447 api_server.go:72] duration metric: took 282.503468ms to wait for apiserver process to appear ...
	I1026 08:26:55.888546  193447 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:26:55.888570  193447 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:26:55.893345  193447 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 08:26:55.894481  193447 api_server.go:141] control plane version: v1.34.1
	I1026 08:26:55.894509  193447 api_server.go:131] duration metric: took 5.954771ms to wait for apiserver health ...
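The healthz probe against https://192.168.103.2:8443/healthz simply expects an HTTP 200 with body "ok". A sketch of the same request; certificate verification is disabled here only because the cluster CA is not assumed to be in the system trust store (endpoint and timeout are illustrative):

// Sketch: one-shot apiserver health check like the one logged above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	resp, err := client.Get("https://192.168.103.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}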
	I1026 08:26:55.894519  193447 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:26:55.898311  193447 system_pods.go:59] 7 kube-system pods found
	I1026 08:26:55.898344  193447 system_pods.go:61] "coredns-66bc5c9577-qcszn" [c3e0eaff-6a88-440e-98b7-9230b2966e07] Running
	I1026 08:26:55.898351  193447 system_pods.go:61] "etcd-pause-504806" [7e996a9f-c079-405e-8c07-e7cfa96c1c0a] Running
	I1026 08:26:55.898357  193447 system_pods.go:61] "kindnet-cjpzm" [0ee3c5e1-47fd-4318-9c90-f8eb93610ebf] Running
	I1026 08:26:55.898363  193447 system_pods.go:61] "kube-apiserver-pause-504806" [111783e2-0d07-4372-8f7f-7906dbb27b7b] Running
	I1026 08:26:55.898370  193447 system_pods.go:61] "kube-controller-manager-pause-504806" [d6770757-94b2-452d-8605-2864f08979fb] Running
	I1026 08:26:55.898375  193447 system_pods.go:61] "kube-proxy-9d7fv" [5884f8ce-f7c9-452b-b9b0-b025b0a22792] Running
	I1026 08:26:55.898381  193447 system_pods.go:61] "kube-scheduler-pause-504806" [8f8d83d3-0623-46cd-9f40-4aa50a5c7173] Running
	I1026 08:26:55.898388  193447 system_pods.go:74] duration metric: took 3.862442ms to wait for pod list to return data ...
	I1026 08:26:55.898403  193447 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:26:55.900956  193447 default_sa.go:45] found service account: "default"
	I1026 08:26:55.900979  193447 default_sa.go:55] duration metric: took 2.569042ms for default service account to be created ...
	I1026 08:26:55.900989  193447 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:26:55.903903  193447 system_pods.go:86] 7 kube-system pods found
	I1026 08:26:55.903933  193447 system_pods.go:89] "coredns-66bc5c9577-qcszn" [c3e0eaff-6a88-440e-98b7-9230b2966e07] Running
	I1026 08:26:55.903943  193447 system_pods.go:89] "etcd-pause-504806" [7e996a9f-c079-405e-8c07-e7cfa96c1c0a] Running
	I1026 08:26:55.903948  193447 system_pods.go:89] "kindnet-cjpzm" [0ee3c5e1-47fd-4318-9c90-f8eb93610ebf] Running
	I1026 08:26:55.903954  193447 system_pods.go:89] "kube-apiserver-pause-504806" [111783e2-0d07-4372-8f7f-7906dbb27b7b] Running
	I1026 08:26:55.903960  193447 system_pods.go:89] "kube-controller-manager-pause-504806" [d6770757-94b2-452d-8605-2864f08979fb] Running
	I1026 08:26:55.903974  193447 system_pods.go:89] "kube-proxy-9d7fv" [5884f8ce-f7c9-452b-b9b0-b025b0a22792] Running
	I1026 08:26:55.903980  193447 system_pods.go:89] "kube-scheduler-pause-504806" [8f8d83d3-0623-46cd-9f40-4aa50a5c7173] Running
	I1026 08:26:55.903989  193447 system_pods.go:126] duration metric: took 2.992978ms to wait for k8s-apps to be running ...
	I1026 08:26:55.904004  193447 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:26:55.904051  193447 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:26:55.918664  193447 system_svc.go:56] duration metric: took 14.654221ms WaitForService to wait for kubelet
	I1026 08:26:55.918695  193447 kubeadm.go:586] duration metric: took 312.686511ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:26:55.918719  193447 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:26:55.921686  193447 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:26:55.921718  193447 node_conditions.go:123] node cpu capacity is 8
	I1026 08:26:55.921728  193447 node_conditions.go:105] duration metric: took 3.00404ms to run NodePressure ...
	I1026 08:26:55.921739  193447 start.go:241] waiting for startup goroutines ...
	I1026 08:26:55.921745  193447 start.go:246] waiting for cluster config update ...
	I1026 08:26:55.921752  193447 start.go:255] writing updated cluster config ...
	I1026 08:26:55.921991  193447 ssh_runner.go:195] Run: rm -f paused
	I1026 08:26:55.926116  193447 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:26:55.926722  193447 kapi.go:59] client config for pause-504806: &rest.Config{Host:"https://192.168.103.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:26:55.929748  193447 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-qcszn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.934034  193447 pod_ready.go:94] pod "coredns-66bc5c9577-qcszn" is "Ready"
	I1026 08:26:55.934061  193447 pod_ready.go:86] duration metric: took 4.289515ms for pod "coredns-66bc5c9577-qcszn" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.936054  193447 pod_ready.go:83] waiting for pod "etcd-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.940341  193447 pod_ready.go:94] pod "etcd-pause-504806" is "Ready"
	I1026 08:26:55.940367  193447 pod_ready.go:86] duration metric: took 4.289908ms for pod "etcd-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.942244  193447 pod_ready.go:83] waiting for pod "kube-apiserver-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.946698  193447 pod_ready.go:94] pod "kube-apiserver-pause-504806" is "Ready"
	I1026 08:26:55.946721  193447 pod_ready.go:86] duration metric: took 4.44185ms for pod "kube-apiserver-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:55.948826  193447 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.330033  193447 pod_ready.go:94] pod "kube-controller-manager-pause-504806" is "Ready"
	I1026 08:26:56.330070  193447 pod_ready.go:86] duration metric: took 381.221119ms for pod "kube-controller-manager-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.530117  193447 pod_ready.go:83] waiting for pod "kube-proxy-9d7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:56.931217  193447 pod_ready.go:94] pod "kube-proxy-9d7fv" is "Ready"
	I1026 08:26:56.931267  193447 pod_ready.go:86] duration metric: took 401.126477ms for pod "kube-proxy-9d7fv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.130789  193447 pod_ready.go:83] waiting for pod "kube-scheduler-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.529605  193447 pod_ready.go:94] pod "kube-scheduler-pause-504806" is "Ready"
	I1026 08:26:57.529632  193447 pod_ready.go:86] duration metric: took 398.812394ms for pod "kube-scheduler-pause-504806" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:26:57.529646  193447 pod_ready.go:40] duration metric: took 1.603499393s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:26:57.573707  193447 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:26:57.606645  193447 out.go:179] * Done! kubectl is now configured to use "pause-504806" cluster and "default" namespace by default
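
The tail of this pause-504806 run is minikube's extra readiness pass: it builds a client directly from the profile certificates shown in the kapi.go:59 dump above, then waits label by label for every kube-system control-plane pod to report Ready within the 4m0s budget. A minimal client-go sketch of the same pass, assuming the cert paths and label selectors from the log; the podReady helper and the polling cadence are mine, not minikube's:

// readiness.go: poll kube-system pods per component label, as pod_ready.go does above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Client built the same way as the rest.Config dump in the log.
	profile := "/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/pause-504806"
	cfg := &rest.Config{
		Host: "https://192.168.103.2:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: profile + "/client.crt",
			KeyFile:  profile + "/client.key",
			CAFile:   "/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt",
		},
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// One selector per control-plane component, matching the log's label list.
	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute) // the "extra waiting up to 4m0s" budget
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: sel})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Printf("pods with %q are Ready\n", sel)
				break
			}
			if time.Now().After(deadline) {
				panic("timed out waiting for " + sel)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
}
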
	I1026 08:26:56.702682  192378 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1026 08:26:56.707334  192378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:56.732157  192378 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1026 08:26:56.732239  192378 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:26:56.764868  192378 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1026 08:26:56.764886  192378 start.go:472] detecting cgroup driver to use...
	I1026 08:26:56.764924  192378 detect.go:199] detected "systemd" cgroup driver on host os
	I1026 08:26:56.765051  192378 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:26:56.781999  192378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:26:56.794411  192378 docker.go:203] disabling cri-docker service (if available) ...
	I1026 08:26:56.794456  192378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:26:56.810551  192378 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:26:56.827230  192378 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:26:56.900863  192378 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:26:56.982929  192378 docker.go:219] disabling docker service ...
	I1026 08:26:56.982994  192378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:26:57.001649  192378 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:26:57.013594  192378 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:26:57.088338  192378 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:26:57.292705  192378 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:26:57.305150  192378 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:26:57.323022  192378 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 08:26:57.323078  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.339517  192378 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:26:57.339571  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.351281  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.361861  192378 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:26:57.383739  192378 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:26:57.402108  192378 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:26:57.413391  192378 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:26:57.423459  192378 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:26:57.488996  192378 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:26:58.996072  192378 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.507042374s)
	I1026 08:26:58.996103  192378 start.go:519] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:26:58.996157  192378 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:26:59.000693  192378 start.go:540] Will wait 60s for crictl version
	I1026 08:26:59.000762  192378 ssh_runner.go:195] Run: which crictl
	I1026 08:26:59.005279  192378 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1026 08:26:59.047366  192378 start.go:556] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1026 08:26:59.047462  192378 ssh_runner.go:195] Run: crio --version
	I1026 08:26:59.084072  192378 ssh_runner.go:195] Run: crio --version
	I1026 08:26:59.126693  192378 out.go:177] * Preparing Kubernetes v1.28.3 on CRI-O 1.24.6 ...
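
In the block above, minikube points crictl at the CRI-O socket via /etc/crictl.yaml and rewrites /etc/crio/crio.conf.d/02-crio.conf with sed so the pause image and the cgroup manager (systemd, with conmon in the pod cgroup) match the detected host configuration, then restarts crio. A minimal Go sketch of the same edit-and-restart sequence, assuming local execution with sudo; minikube issues these commands over SSH inside the node container, so running this on a real host would edit that host's CRI-O config:

// crio_retune.go: re-apply the CRI-O edits from crio.go:59/70 above, locally.
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command with sudo and echoes its output.
func run(cmd string) {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	if err != nil {
		panic(err)
	}
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	// Point crictl at the CRI-O socket (the tee /etc/crictl.yaml step).
	run(`mkdir -p /etc && printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' > /etc/crictl.yaml`)
	// Same sed edits as in the log: pause image, then systemd cgroup manager.
	run(fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf))
	run(fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' %s`, conf))
	run(fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf))
	run(fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf))
	run("systemctl restart crio") // took ~1.5s in this run
}
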
	I1026 08:26:58.905701  194299 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kubernetes-upgrade-462840:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.611480367s)
	I1026 08:26:58.905732  194299 kic.go:203] duration metric: took 4.611626367s to extract preloaded images to volume ...
	W1026 08:26:58.905822  194299 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:26:58.905865  194299 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:26:58.905901  194299 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:26:58.969829  194299 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubernetes-upgrade-462840 --name kubernetes-upgrade-462840 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubernetes-upgrade-462840 --network kubernetes-upgrade-462840 --ip 192.168.85.2 --volume kubernetes-upgrade-462840:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:26:59.267278  194299 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Running}}
	I1026 08:26:59.288953  194299 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:26:59.308385  194299 cli_runner.go:164] Run: docker exec kubernetes-upgrade-462840 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:26:59.353657  194299 oci.go:144] the created container "kubernetes-upgrade-462840" has a running status.
	I1026 08:26:59.353698  194299 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa...
	I1026 08:26:59.385576  194299 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:26:59.417513  194299 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:26:59.434162  194299 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:26:59.434180  194299 kic_runner.go:114] Args: [docker exec --privileged kubernetes-upgrade-462840 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:26:59.476398  194299 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:26:59.499114  194299 machine.go:93] provisionDockerMachine start ...
	I1026 08:26:59.499223  194299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:26:59.522365  194299 main.go:141] libmachine: Using SSH client type: native
	I1026 08:26:59.522695  194299 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32998 <nil> <nil>}
	I1026 08:26:59.522716  194299 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:26:59.523540  194299 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:34690->127.0.0.1:32998: read: connection reset by peer
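
The dial error above is expected: docker run has just published the container's 22/tcp on 127.0.0.1:32998 and sshd inside the kicbase image is still coming up, so minikube's provisioner retries. A minimal sketch of that wait loop, assuming the host port from this log; the 30s budget is illustrative:

// sshwait.go: wait for the published SSH port to accept TCP connections.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "127.0.0.1:32998" // host port Docker mapped to the container's 22/tcp
	deadline := time.Now().Add(30 * time.Second)
	for {
		c, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			c.Close()
			fmt.Println("ssh port is up:", addr)
			return
		}
		if time.Now().After(deadline) {
			panic(fmt.Sprintf("gave up on %s: %v", addr, err))
		}
		// Early dials fail like the log's "connection reset by peer"; back off briefly.
		time.Sleep(500 * time.Millisecond)
	}
}
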
	I1026 08:26:59.128044  192378 cli_runner.go:164] Run: docker network inspect missing-upgrade-300975 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:26:59.148180  192378 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 08:26:59.152106  192378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:26:59.164301  192378 preload.go:132] Checking if preload exists for k8s version v1.28.3 and runtime crio
	I1026 08:26:59.164364  192378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:59.225917  192378 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 08:26:59.225934  192378 crio.go:415] Images already preloaded, skipping extraction
	I1026 08:26:59.225983  192378 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:26:59.263720  192378 crio.go:496] all images are preloaded for cri-o runtime.
	I1026 08:26:59.263736  192378 cache_images.go:84] Images are preloaded, skipping loading
	I1026 08:26:59.263815  192378 ssh_runner.go:195] Run: crio config
	I1026 08:26:59.313183  192378 cni.go:84] Creating CNI manager for ""
	I1026 08:26:59.313197  192378 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:26:59.313218  192378 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I1026 08:26:59.313281  192378 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:missing-upgrade-300975 NodeName:missing-upgrade-300975 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:26:59.313456  192378 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "missing-upgrade-300975"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:26:59.313535  192378 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --enforce-node-allocatable= --hostname-override=missing-upgrade-300975 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-300975 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
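
The kubeadm.go:181 dump above is the options struct at kubeadm.go:176 rendered through a template. A trimmed sketch of that rendering with text/template; the Opts struct and the reduced field set are mine, covering only a few of the values visible in this config:

// kubeadmcfg.go: render a trimmed ClusterConfiguration like the log's kubeadm.go does.
package main

import (
	"os"
	"text/template"
)

// Opts mirrors a few fields of the kubeadm options struct logged above.
type Opts struct {
	KubernetesVersion string
	PodSubnet         string
	ServiceCIDR       string
	ControlPlane      string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: {{.ControlPlane}}:8443
kubernetesVersion: {{.KubernetesVersion}}
networking:
  dnsDomain: cluster.local
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("cfg").Parse(tmpl))
	_ = t.Execute(os.Stdout, Opts{
		KubernetesVersion: "v1.28.3",
		PodSubnet:         "10.244.0.0/16",
		ServiceCIDR:       "10.96.0.0/12",
		ControlPlane:      "control-plane.minikube.internal",
	})
}
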
	I1026 08:26:59.313597  192378 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.3
	I1026 08:26:59.323887  192378 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:26:59.323962  192378 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:26:59.334291  192378 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (432 bytes)
	I1026 08:26:59.354922  192378 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:26:59.379158  192378 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I1026 08:26:59.398361  192378 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:26:59.402671  192378 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
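
Both /etc/hosts updates in this run (host.minikube.internal earlier, control-plane.minikube.internal here) use the same filter-and-append trick: strip any old entry, echo the new one, and copy the result back with sudo. A small Go sketch wrapping the exact bash pipeline from the log; the IP and hostname are the ones logged here:

// pinhosts.go: pin control-plane.minikube.internal in /etc/hosts as the log does.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	entry := "192.168.94.2\tcontrol-plane.minikube.internal"
	// Filter the old entry, append the new one, then copy over /etc/hosts via sudo.
	script := fmt.Sprintf(`{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ /etc/hosts`, entry)
	out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
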
	I1026 08:26:59.420156  192378 certs.go:56] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975 for IP: 192.168.94.2
	I1026 08:26:59.420198  192378 certs.go:190] acquiring lock for shared ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.420389  192378 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:26:59.420488  192378 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:26:59.420558  192378 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.key
	I1026 08:26:59.420571  192378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.crt with IP's: []
	I1026 08:26:59.524767  192378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.crt ...
	I1026 08:26:59.524792  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.crt: {Name:mk2e77a994789b45978919e3d333782e6a6b2704 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.524998  192378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.key ...
	I1026 08:26:59.525012  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/client.key: {Name:mk7fdb1f692d83fb63b248b06621afb82654308a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.525159  192378 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key.ad8e880a
	I1026 08:26:59.525175  192378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt.ad8e880a with IP's: [192.168.94.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I1026 08:26:59.744304  192378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt.ad8e880a ...
	I1026 08:26:59.744327  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt.ad8e880a: {Name:mk695665c915a6e99cacfaac7f2436d4a93c9d2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.744519  192378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key.ad8e880a ...
	I1026 08:26:59.744531  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key.ad8e880a: {Name:mke6542af0d2a8fcb44f0b69d0133762077bd8c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.744635  192378 certs.go:337] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt.ad8e880a -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt
	I1026 08:26:59.744735  192378 certs.go:341] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key.ad8e880a -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key
	I1026 08:26:59.744824  192378 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.key
	I1026 08:26:59.744837  192378 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.crt with IP's: []
	I1026 08:26:59.954452  192378 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.crt ...
	I1026 08:26:59.954469  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.crt: {Name:mk1747b4f499f506b63dc4d95d0f47058f89b90b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:26:59.954619  192378 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.key ...
	I1026 08:26:59.954627  192378 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.key: {Name:mk14631b3500965ed863ba77cb6fa837cb8a9e12 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
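
Each certs.go:319 step above follows the same pattern: generate a key pair, build an x509 template with the wanted SANs, and sign it with the shared minikubeCA from .minikube/. A self-contained crypto/x509 sketch of the apiserver-cert case, assuming a throwaway CA generated in-process (minikube loads the existing ca.crt/ca.key instead); the four IP SANs are the ones from crypto.go:68 above:

// signcert.go: sketch of the "minikube signed cert" generation step.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA; minikube would load .minikube/ca.crt and ca.key here.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert with the IP SANs from the log: node IP, service VIP, localhost.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("192.168.94.2"), net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
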
	I1026 08:26:59.954804  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:26:59.954834  192378 certs.go:433] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:26:59.954842  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:26:59.954868  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:26:59.954890  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:26:59.954908  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:26:59.954941  192378 certs.go:437] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:26:59.955647  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I1026 08:26:59.981118  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:27:00.005925  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:27:00.030494  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/missing-upgrade-300975/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:27:00.054926  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:27:00.078946  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:27:00.103425  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:27:00.127919  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:27:00.153368  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:27:00.181765  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:27:00.210167  192378 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:27:00.234916  192378 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:27:00.254089  192378 ssh_runner.go:195] Run: openssl version
	I1026 08:27:00.259703  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:27:00.269860  192378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:27:00.273470  192378 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:27:00.273521  192378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:27:00.280576  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:27:00.292919  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:27:00.304890  192378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:27:00.309386  192378 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:27:00.309442  192378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:27:00.317799  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:27:00.328559  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:27:00.339296  192378 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:27:00.343115  192378 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:27:00.343157  192378 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:27:00.350443  192378 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
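
The ln -fs targets above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names: lookup-by-directory expects each trust anchor to be reachable as <hash>.0, where the hash is what openssl x509 -hash -noout prints, which is exactly what the preceding Run lines compute. A small sketch that reproduces the link name for one CA; the input path matches the log, and running it requires the openssl binary:

// cahash.go: compute the /etc/ssl/certs/<hash>.0 link name for a CA certificate.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	// c_rehash convention: the first cert with this subject hash gets suffix .0.
	fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
}
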
	I1026 08:27:00.360469  192378 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I1026 08:27:00.364855  192378 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I1026 08:27:00.364908  192378 kubeadm.go:404] StartCluster: {Name:missing-upgrade-300975 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.42@sha256:d35ac07dfda971cabee05e0deca8aeac772f885a5348e1a0c0b0a36db20fcfc0 Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.3 ClusterName:missing-upgrade-300975 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.3 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s GPUs:}
	I1026 08:27:00.364986  192378 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:27:00.365039  192378 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:27:00.403138  192378 cri.go:89] found id: ""
	I1026 08:27:00.403207  192378 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:27:00.413599  192378 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:27:00.423705  192378 kubeadm.go:226] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:27:00.423757  192378 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:27:00.433961  192378 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:27:00.433999  192378 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:27:00.524279  192378 kubeadm.go:322] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:27:00.591165  192378 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
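
The kubeadm init invocation at ssh_runner.go:286 carries a long --ignore-preflight-errors list because the docker driver cannot satisfy checks such as SystemVerification; the two [WARNING] lines above are those tolerated failures surfacing. A sketch of assembling that invocation with os/exec, with the ignore list trimmed to a representative subset of the one in the log:

// kubeadminit.go: assemble the kubeadm init command from the log (paths as logged).
package main

import (
	"os"
	"os/exec"
	"strings"
)

func main() {
	ignores := []string{
		"DirAvailable--etc-kubernetes-manifests", "DirAvailable--var-lib-minikube",
		"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		"FileContent--proc-sys-net-bridge-bridge-nf-call-iptables",
	} // trimmed; the log also lists every FileAvailable-- manifest check
	cmd := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.28.3:"+os.Getenv("PATH"),
		"kubeadm", "init", "--config", "/var/tmp/minikube/kubeadm.yaml",
		"--ignore-preflight-errors="+strings.Join(ignores, ","))
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	_ = cmd.Run() // warnings like "[WARNING SystemVerification]" are non-fatal
}
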
	
	
	==> CRI-O <==
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.15540385Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.156302443Z" level=info msg="Conmon does support the --sync option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.156324712Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.15634332Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.157216654Z" level=info msg="Conmon does support the --sync option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.157237385Z" level=info msg="Conmon does support the --log-global-size-max option"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.161619028Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.161652248Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.162421719Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.162986006Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.16305232Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.169402883Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.225688677Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-qcszn Namespace:kube-system ID:af96428e49f357c491cdb5ba06ae603f3f368988f3756bf4df4544b02c993719 UID:c3e0eaff-6a88-440e-98b7-9230b2966e07 NetNS:/var/run/netns/adbde4ba-90c8-4abf-b098-d89ccbbbe432 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000132740}] Aliases:map[]}"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.225891643Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-qcszn for CNI network kindnet (type=ptp)"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.22643727Z" level=info msg="Registered SIGHUP reload watcher"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226469799Z" level=info msg="Starting seccomp notifier watcher"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226528891Z" level=info msg="Create NRI interface"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226666918Z" level=info msg="built-in NRI default validator is disabled"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226684014Z" level=info msg="runtime interface created"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226697494Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226706825Z" level=info msg="runtime interface starting up..."
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226713977Z" level=info msg="starting plugins..."
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.226728877Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Oct 26 08:26:54 pause-504806 crio[2136]: time="2025-10-26T08:26:54.227327011Z" level=info msg="No systemd watchdog enabled"
	Oct 26 08:26:54 pause-504806 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	4142ecdf1fe30       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   19 seconds ago      Running             coredns                   0                   af96428e49f35       coredns-66bc5c9577-qcszn               kube-system
	65936c8bb6486       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   30 seconds ago      Running             kube-proxy                0                   1352f2732c108       kube-proxy-9d7fv                       kube-system
	bdbccec25f128       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   30 seconds ago      Running             kindnet-cni               0                   82ec617f17e3a       kindnet-cjpzm                          kube-system
	d6bff8cede979       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   41 seconds ago      Running             kube-apiserver            0                   ae5517cc6556e       kube-apiserver-pause-504806            kube-system
	e382b82319af9       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   41 seconds ago      Running             kube-controller-manager   0                   54dec7c674d2a       kube-controller-manager-pause-504806   kube-system
	8d285175a1f06       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   41 seconds ago      Running             kube-scheduler            0                   117c4d06d9e91       kube-scheduler-pause-504806            kube-system
	fcec7a37f3c1b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   41 seconds ago      Running             etcd                      0                   24e09776faef7       etcd-pause-504806                      kube-system
	
	
	==> coredns [4142ecdf1fe30029dbfe7b06d257a2cd3f8a1a259d6e1e656fa68fb6b6f48f60] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:42587 - 33191 "HINFO IN 4207833582110556143.1016658930209020495. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018687599s
	
	
	==> describe nodes <==
	Name:               pause-504806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-504806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=pause-504806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_26_27_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:26:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-504806
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:26:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:26:43 +0000   Sun, 26 Oct 2025 08:26:43 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    pause-504806
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                34f52b80-d738-4d86-b17a-bcff33c913fb
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-qcszn                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     32s
	  kube-system                 etcd-pause-504806                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         37s
	  kube-system                 kindnet-cjpzm                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      32s
	  kube-system                 kube-apiserver-pause-504806             250m (3%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-controller-manager-pause-504806    200m (2%)     0 (0%)      0 (0%)           0 (0%)         38s
	  kube-system                 kube-proxy-9d7fv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-pause-504806             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 30s                kube-proxy       
	  Normal  Starting                 43s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  43s (x8 over 43s)  kubelet          Node pause-504806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    43s (x8 over 43s)  kubelet          Node pause-504806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     43s (x8 over 43s)  kubelet          Node pause-504806 status is now: NodeHasSufficientPID
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s                kubelet          Node pause-504806 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s                kubelet          Node pause-504806 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s                kubelet          Node pause-504806 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           33s                node-controller  Node pause-504806 event: Registered Node pause-504806 in Controller
	  Normal  NodeReady                20s                kubelet          Node pause-504806 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [fcec7a37f3c1b7712f65f3f276cd8dbcc20be3d019eba5ee54f6ecb649c99cc5] <==
	{"level":"warn","ts":"2025-10-26T08:26:22.713529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.724309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.736339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.747889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.756681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.774884Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.790328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56244","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.804675Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.817423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.831228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.836279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.849552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.858540Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56450","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.868753Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56516","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.893440Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.904741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.913712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.922436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.940691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.947275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.958584Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:22.982299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:26:23.051822Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56930","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:26:43.402113Z","caller":"traceutil/trace.go:172","msg":"trace[1296222753] transaction","detail":"{read_only:false; response_revision:418; number_of_response:1; }","duration":"110.889346ms","start":"2025-10-26T08:26:43.291203Z","end":"2025-10-26T08:26:43.402093Z","steps":["trace[1296222753] 'process raft request'  (duration: 110.749714ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:26:43.504013Z","caller":"traceutil/trace.go:172","msg":"trace[1699892003] transaction","detail":"{read_only:false; response_revision:419; number_of_response:1; }","duration":"208.551964ms","start":"2025-10-26T08:26:43.295441Z","end":"2025-10-26T08:26:43.503993Z","steps":["trace[1699892003] 'process raft request'  (duration: 208.390223ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:27:03 up  1:09,  0 user,  load average: 5.25, 2.26, 1.40
	Linux pause-504806 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bdbccec25f128438488e48390310c61c2e866b1dd32e2b66f0c12735a239f9b0] <==
	I1026 08:26:32.532830       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:26:32.533314       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 08:26:32.533467       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:26:32.533486       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:26:32.533506       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:26:32Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:26:32.739784       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:26:32.739810       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:26:32.739822       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:26:32.740115       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:26:33.103684       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:26:33.103726       1 metrics.go:72] Registering metrics
	I1026 08:26:33.103800       1 controller.go:711] "Syncing nftables rules"
	I1026 08:26:42.740289       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:26:42.740379       1 main.go:301] handling current node
	I1026 08:26:52.743818       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:26:52.743853       1 main.go:301] handling current node
	I1026 08:27:02.747122       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:27:02.747172       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d6bff8cede97952be272b19ac001db58dffafae8ec651ec1949c3946e1a69f0e] <==
	I1026 08:26:23.849494       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:26:23.849540       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:26:23.849739       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:26:23.850566       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 08:26:23.856564       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:26:23.858010       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:23.865638       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:26:23.866007       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:26:24.752023       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:26:24.755605       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:26:24.755624       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:26:25.284751       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:26:25.352015       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:26:25.460531       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:26:25.473134       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1026 08:26:25.474487       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:26:25.482887       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:26:25.781386       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:26:26.510717       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:26:26.521831       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:26:26.530279       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:26:31.535954       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:31.549342       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:26:31.584897       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:26:31.881935       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [e382b82319af9e2a4edf1b892db5b91bca0282ac246cfb7c71726684226b98ec] <==
	I1026 08:26:30.733886       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:26:30.734574       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:26:30.743002       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-504806" podCIDRs=["10.244.0.0/24"]
	I1026 08:26:30.743169       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:26:30.751150       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:26:30.777360       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 08:26:30.778441       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:26:30.778462       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:26:30.778504       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:26:30.778532       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:26:30.778639       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:26:30.778668       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:26:30.778679       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:26:30.780924       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:26:30.785131       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 08:26:30.785159       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:26:30.785185       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:26:30.787481       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:26:30.787618       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:26:30.796196       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 08:26:30.811869       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:26:30.827550       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:26:30.827569       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:26:30.827574       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:26:45.968789       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [65936c8bb6486e4c862dabe5143e4456e412dde42e71c121cca6af8ced39b26b] <==
	I1026 08:26:32.330923       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:26:32.398303       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:26:32.499188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:26:32.499225       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 08:26:32.499334       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:26:32.520561       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:26:32.520623       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:26:32.527090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:26:32.527614       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:26:32.527667       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:26:32.529300       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:26:32.529332       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:26:32.529347       1 config.go:200] "Starting service config controller"
	I1026 08:26:32.529353       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:26:32.529399       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:26:32.529409       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:26:32.529430       1 config.go:309] "Starting node config controller"
	I1026 08:26:32.529443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:26:32.529450       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:26:32.630268       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:26:32.630308       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:26:32.630308       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [8d285175a1f0637dbd439b948468559b78c5baf706c85d4392df8f983fb8db67] <==
	I1026 08:26:24.360029       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:26:24.362042       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:26:24.362077       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:26:24.362453       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:26:24.362526       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1026 08:26:24.364419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 08:26:24.364538       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:26:24.365620       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:26:24.368276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:26:24.368526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:26:24.368553       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:26:24.368642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:26:24.368653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:26:24.368702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:26:24.368717       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:26:24.368736       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:26:24.368798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:26:24.368815       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:26:24.368820       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:26:24.368833       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:26:24.368908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:26:24.368926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:26:24.368989       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:26:24.368933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1026 08:26:25.962193       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.389173    1290 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.389186    1290 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464636    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464713    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: E1026 08:26:52.464732    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.490011    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.618936    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:52 pause-504806 kubelet[1290]: W1026 08:26:52.868400    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: W1026 08:26:53.264921    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465181    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465237    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:53 pause-504806 kubelet[1290]: E1026 08:26:53.465278    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:53 pause-504806 kubelet[1290]: W1026 08:26:53.851111    1290 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388237    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388372    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388392    1290 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.388404    1290 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466418    1290 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466512    1290 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:54 pause-504806 kubelet[1290]: E1026 08:26:54.466544    1290 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Oct 26 08:26:58 pause-504806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:26:58 pause-504806 kubelet[1290]: I1026 08:26:58.326139    1290 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:26:58 pause-504806 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:26:58 pause-504806 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:26:58 pause-504806 systemd[1]: kubelet.service: Consumed 1.320s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-504806 -n pause-504806
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-504806 -n pause-504806: exit status 2 (337.780516ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-504806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.03s)
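The kubelet tail above shows the proximate cause: once the profile is paused, every CRI call fails with "dial unix /var/run/crio/crio.sock: connect: no such file or directory", i.e. CRI-O has stopped serving its socket while the kubelet is still polling it. A minimal sketch for confirming this by hand, assuming a local reproduction with the same profile name as this run:

    # Reproduce the pause, then check whether CRI-O is still serving its socket.
    out/minikube-linux-amd64 start -p pause-504806 --container-runtime=crio
    out/minikube-linux-amd64 pause -p pause-504806
    out/minikube-linux-amd64 ssh -p pause-504806 -- sudo systemctl status crio --no-pager
    out/minikube-linux-amd64 ssh -p pause-504806 -- ls -l /var/run/crio/crio.sock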

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (272.783224ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:30:05Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
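The exit status 11 comes from minikube's paused-state probe rather than from the addon itself: per the error text above, enabling an addon first lists paused containers with "sudo runc list -f json", and that probe fails because /run/runc is missing on the node. The same probe can be re-run by hand; the profile name is taken from this test:

    # Re-run the probe quoted in the error above, inside the node container:
    out/minikube-linux-amd64 ssh -p old-k8s-version-810379 -- sudo runc list -f json
    # Check whether the runc state directory exists at all:
    out/minikube-linux-amd64 ssh -p old-k8s-version-810379 -- ls -ld /run/runc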
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-810379 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-810379 describe deploy/metrics-server -n kube-system: exit status 1 (79.124745ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-810379 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
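The assertion above had no deployment to inspect because the enable itself failed; when the addon does deploy, the image override the test expects can be checked directly with a jsonpath query, e.g.:

    # Verify the metrics-server image override (expected to contain fake.domain/...):
    kubectl --context old-k8s-version-810379 -n kube-system get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'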
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-810379
helpers_test.go:243: (dbg) docker inspect old-k8s-version-810379:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	        "Created": "2025-10-26T08:29:09.042514733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 229994,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:29:09.079737989Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hosts",
	        "LogPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c-json.log",
	        "Name": "/old-k8s-version-810379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-810379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-810379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	                "LowerDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-810379",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-810379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-810379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "06e580aa720326753fbad921e2961ae3517aa1b7432be5dc1c7cce707e2f3b86",
	            "SandboxKey": "/var/run/docker/netns/06e580aa7203",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33043"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33044"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33047"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33045"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33046"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-810379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:f5:4d:eb:30:6c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19bd044ce129aeaf476dbf54add850f4fcc444c6e57c15a6d61eea854dbd9172",
	                    "EndpointID": "a27c846038ab6556acff353da2968b8c24b60168828fcd2cee2895c066d345b0",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-810379",
	                        "ccdf5b36aedf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
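For the fields that matter to this failure (whether the node container is running and unpaused), the full inspect dump can be reduced with a Go template instead of scanning the JSON by eye:

    # Pull just the container state from the dump above:
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' old-k8s-version-810379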
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25: (1.103636584s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-110992 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo containerd config dump                                                                                                                                                                                                  │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo crio config                                                                                                                                                                                                             │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cilium-110992                                                                                                                                                                                                                              │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cert-expiration-535689                                                                                                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ stop    │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:29:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
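	The header above documents the klog line format used throughout this log. As a reading aid, here is a minimal Go sketch that splits one such line into severity, date, time, PID, source location, and message; the regular expression is our own, not part of minikube:
	
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	// Matches klog-style headers: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+:\d+)\] (.*)$`)
	
	func main() {
		line := "I1026 08:29:58.045059  243672 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}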
	I1026 08:29:58.045059  243672 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:29:58.045351  243672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:29:58.045363  243672 out.go:374] Setting ErrFile to fd 2...
	I1026 08:29:58.045369  243672 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:29:58.045555  243672 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:29:58.046028  243672 out.go:368] Setting JSON to false
	I1026 08:29:58.047099  243672 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4349,"bootTime":1761463049,"procs":313,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:29:58.047176  243672 start.go:141] virtualization: kvm guest
	I1026 08:29:58.049132  243672 out.go:179] * [embed-certs-752315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:29:58.050217  243672 notify.go:220] Checking for updates...
	I1026 08:29:58.050262  243672 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:29:58.051240  243672 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:29:58.053045  243672 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:29:58.054233  243672 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:29:58.055362  243672 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:29:58.056350  243672 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:29:58.057670  243672 config.go:182] Loaded profile config "kubernetes-upgrade-462840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:29:58.057758  243672 config.go:182] Loaded profile config "no-preload-001983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:29:58.057828  243672 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:29:58.057899  243672 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:29:58.081515  243672 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:29:58.081615  243672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:29:58.143419  243672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-26 08:29:58.132952954 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:29:58.143573  243672 docker.go:318] overlay module found
	I1026 08:29:58.145167  243672 out.go:179] * Using the docker driver based on user configuration
	I1026 08:29:58.146395  243672 start.go:305] selected driver: docker
	I1026 08:29:58.146411  243672 start.go:925] validating driver "docker" against <nil>
	I1026 08:29:58.146424  243672 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:29:58.147188  243672 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:29:58.207993  243672 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:91 SystemTime:2025-10-26 08:29:58.196682984 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:29:58.208196  243672 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:29:58.208465  243672 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:29:58.210153  243672 out.go:179] * Using Docker driver with root privileges
	I1026 08:29:58.211294  243672 cni.go:84] Creating CNI manager for ""
	I1026 08:29:58.211367  243672 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:29:58.211379  243672 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:29:58.211450  243672 start.go:349] cluster config:
	{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:29:58.212748  243672 out.go:179] * Starting "embed-certs-752315" primary control-plane node in "embed-certs-752315" cluster
	I1026 08:29:58.213949  243672 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:29:58.215111  243672 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:29:58.216074  243672 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:29:58.216106  243672 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:29:58.216112  243672 cache.go:58] Caching tarball of preloaded images
	I1026 08:29:58.216181  243672 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:29:58.216194  243672 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:29:58.216218  243672 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:29:58.216322  243672 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json ...
	I1026 08:29:58.216342  243672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json: {Name:mk87f9b23a535fd5c977fd69a51d91d6ddcbceee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:29:58.237389  243672 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:29:58.237412  243672 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:29:58.237431  243672 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:29:58.237465  243672 start.go:360] acquireMachinesLock for embed-certs-752315: {Name:mke5e92fe2bbc27b2e8ece3d6f167d2db37c8fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:29:58.237575  243672 start.go:364] duration metric: took 90.227µs to acquireMachinesLock for "embed-certs-752315"
	I1026 08:29:58.237602  243672 start.go:93] Provisioning new machine with config: &{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:29:58.237685  243672 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:29:57.667528  237215 out.go:252]   - Booting up control plane ...
	I1026 08:29:57.667622  237215 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:29:57.667758  237215 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:29:57.668505  237215 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:29:57.682854  237215 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:29:57.683005  237215 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:29:57.690011  237215 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:29:57.690403  237215 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:29:57.690481  237215 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:29:57.796317  237215 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:29:57.796487  237215 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:29:58.797308  237215 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001019235s
	I1026 08:29:58.802483  237215 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:29:58.802655  237215 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 08:29:58.802772  237215 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:29:58.802902  237215 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
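	The [control-plane-check] lines poll one HTTPS endpoint per component until it answers. A minimal sketch of such a polling loop, assuming the endpoints and the 4m0s budget quoted in the log; skipping TLS verification here is an illustration-only shortcut:
	
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)
	
	// Endpoints copied from the log lines above.
	var checks = map[string]string{
		"kube-apiserver":          "https://192.168.76.2:8443/livez",
		"kube-controller-manager": "https://127.0.0.1:10257/healthz",
		"kube-scheduler":          "https://127.0.0.1:10259/livez",
	}
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(4 * time.Minute) // "This can take up to 4m0s"
		for name, url := range checks {
			for time.Now().Before(deadline) {
				if resp, err := client.Get(url); err == nil {
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Printf("%s is healthy\n", name)
						break
					}
				}
				time.Sleep(time.Second)
			}
		}
	}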
	I1026 08:29:58.705350  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:29:58.705757  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:29:58.705812  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:29:58.705870  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:29:58.735008  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:29:58.735032  204716 cri.go:89] found id: ""
	I1026 08:29:58.735040  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:29:58.735101  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:29:58.739142  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:29:58.739198  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:29:58.775132  204716 cri.go:89] found id: ""
	I1026 08:29:58.775155  204716 logs.go:282] 0 containers: []
	W1026 08:29:58.775164  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:29:58.775169  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:29:58.775224  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:29:58.809530  204716 cri.go:89] found id: ""
	I1026 08:29:58.809553  204716 logs.go:282] 0 containers: []
	W1026 08:29:58.809562  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:29:58.809567  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:29:58.809612  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:29:58.838955  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:29:58.839015  204716 cri.go:89] found id: ""
	I1026 08:29:58.839025  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:29:58.839084  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:29:58.843424  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:29:58.843488  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:29:58.874204  204716 cri.go:89] found id: ""
	I1026 08:29:58.874232  204716 logs.go:282] 0 containers: []
	W1026 08:29:58.874242  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:29:58.874260  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:29:58.874321  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:29:58.901899  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:29:58.901924  204716 cri.go:89] found id: ""
	I1026 08:29:58.901934  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:29:58.901996  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:29:58.906236  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:29:58.906348  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:29:58.937359  204716 cri.go:89] found id: ""
	I1026 08:29:58.937389  204716 logs.go:282] 0 containers: []
	W1026 08:29:58.937401  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:29:58.937408  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:29:58.937466  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:29:58.968790  204716 cri.go:89] found id: ""
	I1026 08:29:58.968815  204716 logs.go:282] 0 containers: []
	W1026 08:29:58.968824  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:29:58.968834  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:29:58.968853  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:29:58.999525  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:29:58.999554  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:29:59.055091  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:29:59.055123  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:29:59.088878  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:29:59.088914  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:29:59.216382  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:29:59.216414  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:29:59.240212  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:29:59.240271  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:29:59.326325  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:29:59.326353  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:29:59.326368  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:29:59.370677  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:29:59.370708  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
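	Each "Gathering logs for <component>" step above pairs two commands: crictl ps -a --quiet --name=<component> to resolve container IDs, then crictl logs --tail 400 <id>. A minimal local sketch of the same flow (minikube runs these over SSH inside the node; sudo and crictl are assumed to be available):
	
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		// Resolve all container IDs for a named component, as cri.go does.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name=kube-apiserver").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		// Tail the last 400 log lines of each match, as logs.go does.
		for _, id := range strings.Fields(string(out)) {
			logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
			if err != nil {
				fmt.Printf("logs for %s failed: %v\n", id, err)
				continue
			}
			fmt.Printf("==> %s <==\n%s", id, logs)
		}
	}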
	I1026 08:29:58.239601  243672 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:29:58.239778  243672 start.go:159] libmachine.API.Create for "embed-certs-752315" (driver="docker")
	I1026 08:29:58.239804  243672 client.go:168] LocalClient.Create starting
	I1026 08:29:58.239897  243672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:29:58.239931  243672 main.go:141] libmachine: Decoding PEM data...
	I1026 08:29:58.239948  243672 main.go:141] libmachine: Parsing certificate...
	I1026 08:29:58.239998  243672 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:29:58.240016  243672 main.go:141] libmachine: Decoding PEM data...
	I1026 08:29:58.240042  243672 main.go:141] libmachine: Parsing certificate...
	I1026 08:29:58.240382  243672 cli_runner.go:164] Run: docker network inspect embed-certs-752315 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:29:58.256871  243672 cli_runner.go:211] docker network inspect embed-certs-752315 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:29:58.256942  243672 network_create.go:284] running [docker network inspect embed-certs-752315] to gather additional debugging logs...
	I1026 08:29:58.256960  243672 cli_runner.go:164] Run: docker network inspect embed-certs-752315
	W1026 08:29:58.273440  243672 cli_runner.go:211] docker network inspect embed-certs-752315 returned with exit code 1
	I1026 08:29:58.273471  243672 network_create.go:287] error running [docker network inspect embed-certs-752315]: docker network inspect embed-certs-752315: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-752315 not found
	I1026 08:29:58.273485  243672 network_create.go:289] output of [docker network inspect embed-certs-752315]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-752315 not found
	
	** /stderr **
	I1026 08:29:58.273591  243672 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:29:58.292127  243672 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:29:58.292981  243672 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:29:58.293826  243672 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:29:58.294541  243672 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-0bdb8ca3ba1e IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:6a:d3:4e:40:40:cb} reservation:<nil>}
	I1026 08:29:58.295520  243672 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-088475e82217 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:92:ff:71:51:2a:dc} reservation:<nil>}
	I1026 08:29:58.296105  243672 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-19bd044ce129 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:e6:e4:14:25:e8:21} reservation:<nil>}
	I1026 08:29:58.297194  243672 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f9d8f0}
	I1026 08:29:58.297224  243672 network_create.go:124] attempt to create docker network embed-certs-752315 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1026 08:29:58.297315  243672 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-752315 embed-certs-752315
	I1026 08:29:58.359311  243672 network_create.go:108] docker network embed-certs-752315 192.168.103.0/24 created
	I1026 08:29:58.359348  243672 kic.go:121] calculated static IP "192.168.103.2" for the "embed-certs-752315" container
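	network.go above walks candidate /24 subnets and settles on the first free one; the sequence 49, 58, 67, 76, 85, 94, 103 suggests the third octet advances in steps of 9. A minimal sketch of that walk, with the taken set hard-coded from the log rather than discovered from the host's bridges:
	
	package main
	
	import "fmt"
	
	func main() {
		// Third octets the log reports as taken by existing docker bridges.
		taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	
		for octet := 49; octet < 256; octet += 9 {
			if taken[octet] {
				fmt.Printf("skipping subnet 192.168.%d.0/24 that is taken\n", octet)
				continue
			}
			// Matches the log: the gateway gets .1, the node container gets .2.
			fmt.Printf("using free private subnet 192.168.%d.0/24 (gateway 192.168.%d.1, node IP 192.168.%d.2)\n",
				octet, octet, octet)
			break
		}
	}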
	I1026 08:29:58.359405  243672 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:29:58.378404  243672 cli_runner.go:164] Run: docker volume create embed-certs-752315 --label name.minikube.sigs.k8s.io=embed-certs-752315 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:29:58.397749  243672 oci.go:103] Successfully created a docker volume embed-certs-752315
	I1026 08:29:58.397825  243672 cli_runner.go:164] Run: docker run --rm --name embed-certs-752315-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-752315 --entrypoint /usr/bin/test -v embed-certs-752315:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:29:58.805461  243672 oci.go:107] Successfully prepared a docker volume embed-certs-752315
	I1026 08:29:58.805501  243672 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:29:58.805520  243672 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:29:58.805569  243672 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-752315:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
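	The two docker run commands above populate the embed-certs-752315 volume by mounting it into throwaway containers: one runs /usr/bin/test to provision the volume, the other runs tar to unpack the preload into it. A minimal sketch of the same populate-a-volume-via-sidecar pattern; the volume name, tarball path, and ubuntu image are hypothetical stand-ins, and the log's lz4 decompression (-I lz4) is omitted:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// docker creates the named volume "myvol" on first use; the short-lived
		// container exists only to run tar against the mounted volume.
		cmd := exec.Command("docker", "run", "--rm",
			"--entrypoint", "/usr/bin/tar",
			"-v", "/tmp/data.tar:/data.tar:ro", // hypothetical tarball on the host
			"-v", "myvol:/extractDir",
			"ubuntu", "-xf", "/data.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("extract failed: %v\n%s", err, out)
			return
		}
		fmt.Println("volume myvol populated")
	}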
	I1026 08:30:00.307653  237215 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.504827954s
	I1026 08:30:01.182740  237215 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.380068369s
	I1026 08:30:04.320327  237215 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 5.512721561s
	I1026 08:30:04.347509  237215 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:30:04.361318  237215 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:30:04.373393  237215 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:30:04.373713  237215 kubeadm.go:318] [mark-control-plane] Marking the node no-preload-001983 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:30:04.385945  237215 kubeadm.go:318] [bootstrap-token] Using token: mh2fwc.omoi87lc22q72qs9
	I1026 08:30:04.388408  237215 out.go:252]   - Configuring RBAC rules ...
	I1026 08:30:04.388669  237215 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	
	
	==> CRI-O <==
	Oct 26 08:29:52 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:52.10749746Z" level=info msg="Starting container: dcc554ece0bc475703b935bb2fbf58140ec9424c49b4a9a2340df77a5138824d" id=fc0106ae-969a-48fa-b6b5-72c36e5be4c6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:29:52 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:52.109431361Z" level=info msg="Started container" PID=2185 containerID=dcc554ece0bc475703b935bb2fbf58140ec9424c49b4a9a2340df77a5138824d description=kube-system/coredns-5dd5756b68-wrpqk/coredns id=fc0106ae-969a-48fa-b6b5-72c36e5be4c6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b6c4d7fbcccd22d2510bfdaa1f8fa8dd4961f1f9e3141339f737d6f5f3490a84
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.968277573Z" level=info msg="Running pod sandbox: default/busybox/POD" id=32d49e49-dd61-4357-84ba-96ff1101473b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.968391458Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.973595461Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:edf7029076f5d25c5c939722f137a2c602813e674b711d2b1b5fadc61c8a369e UID:c4b87aba-4af2-41ab-b0de-82f97987e1b5 NetNS:/var/run/netns/2bb4e757-c0fc-4768-bde6-26adac1fc289 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00069e958}] Aliases:map[]}"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.973624916Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.985077533Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:edf7029076f5d25c5c939722f137a2c602813e674b711d2b1b5fadc61c8a369e UID:c4b87aba-4af2-41ab-b0de-82f97987e1b5 NetNS:/var/run/netns/2bb4e757-c0fc-4768-bde6-26adac1fc289 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00069e958}] Aliases:map[]}"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.985207901Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.986064335Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.987297478Z" level=info msg="Ran pod sandbox edf7029076f5d25c5c939722f137a2c602813e674b711d2b1b5fadc61c8a369e with infra container: default/busybox/POD" id=32d49e49-dd61-4357-84ba-96ff1101473b name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.988907201Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=2abd4c85-115c-4152-a210-d5eb4b0aa09c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.989070677Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=2abd4c85-115c-4152-a210-d5eb4b0aa09c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.989127382Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=2abd4c85-115c-4152-a210-d5eb4b0aa09c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.989828243Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6fd033e1-504e-4a29-844a-3478d43742ae name=/runtime.v1.ImageService/PullImage
	Oct 26 08:29:55 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:55.992062051Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.375518667Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6fd033e1-504e-4a29-844a-3478d43742ae name=/runtime.v1.ImageService/PullImage
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.376433565Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=eeed93d7-c456-46af-aa42-72c334d54cba name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.377786543Z" level=info msg="Creating container: default/busybox/busybox" id=76eb039f-d37c-4bb9-86b8-ce55dff98cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.377921318Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.382096196Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.382526173Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.417843403Z" level=info msg="Created container c9e0de3933689a9f8e57d18b4773ba31cc5387d8cb3da85ac93897884e3ccaba: default/busybox/busybox" id=76eb039f-d37c-4bb9-86b8-ce55dff98cdf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.418476401Z" level=info msg="Starting container: c9e0de3933689a9f8e57d18b4773ba31cc5387d8cb3da85ac93897884e3ccaba" id=181c924f-34f4-42e3-ad28-9d4460a62b44 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:29:57 old-k8s-version-810379 crio[778]: time="2025-10-26T08:29:57.420324578Z" level=info msg="Started container" PID=2262 containerID=c9e0de3933689a9f8e57d18b4773ba31cc5387d8cb3da85ac93897884e3ccaba description=default/busybox/busybox id=181c924f-34f4-42e3-ad28-9d4460a62b44 name=/runtime.v1.RuntimeService/StartContainer sandboxID=edf7029076f5d25c5c939722f137a2c602813e674b711d2b1b5fadc61c8a369e
	Oct 26 08:30:04 old-k8s-version-810379 crio[778]: time="2025-10-26T08:30:04.756626781Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
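	The CRI-O excerpt shows the standard image path for default/busybox: ImageStatus misses, PullImage fetches the tag, a second ImageStatus resolves the digest, then CreateContainer and StartContainer run it. The pull half can be replayed by hand with crictl; a minimal sketch using the image name from the log:
	
	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		img := "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
		// Pull by tag, as the PullImage call above does.
		if out, err := exec.Command("sudo", "crictl", "pull", img).CombinedOutput(); err != nil {
			fmt.Printf("pull failed: %v\n%s", err, out)
			return
		}
		// Inspect the image; its status should now carry the sha256 digest
		// reported in the "Pulled image" line above.
		out, err := exec.Command("sudo", "crictl", "inspecti", img).Output()
		if err != nil {
			fmt.Println("inspecti failed:", err)
			return
		}
		fmt.Printf("%s", out)
	}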
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	c9e0de3933689       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   edf7029076f5d       busybox                                          default
	dcc554ece0bc4       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   b6c4d7fbcccd2       coredns-5dd5756b68-wrpqk                         kube-system
	609a2abffa610       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   333c8e1cc2f00       storage-provisioner                              kube-system
	8530004ada44a       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   02e79e89e5d87       kindnet-6mfc2                                    kube-system
	e8bef8eb052a0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      26 seconds ago      Running             kube-proxy                0                   5417dd0f97bd3       kube-proxy-455nz                                 kube-system
	5fb705d88fec2       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   a6eb7f9c8c544       kube-apiserver-old-k8s-version-810379            kube-system
	725ac5eb48183       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   9f9cd3c441c05       kube-scheduler-old-k8s-version-810379            kube-system
	10c8d3efaac07       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   0ccd44536059b       etcd-old-k8s-version-810379                      kube-system
	a597f3ec04154       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   0b44ec73440b5       kube-controller-manager-old-k8s-version-810379   kube-system
	
	
	==> coredns [dcc554ece0bc475703b935bb2fbf58140ec9424c49b4a9a2340df77a5138824d] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34754 - 1711 "HINFO IN 3573564394330414751.3400545815108957472. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.029615841s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-810379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-810379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-810379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_29_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-810379
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:29:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:29:57 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:29:57 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:29:57 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:29:57 +0000   Sun, 26 Oct 2025 08:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-810379
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d265c90b-90d2-4c31-9d3f-ae5ff5d718c0
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-wrpqk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-810379                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-6mfc2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-810379             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-810379    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-455nz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-810379             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 26s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node old-k8s-version-810379 event: Registered Node old-k8s-version-810379 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-810379 status is now: NodeReady
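	As a sanity check on the "Allocated resources" percentages in the describe output above: CPU requests are summed in millicores against the node's 8 allocatable CPUs, memory against 32863360Ki, and each fraction is truncated to a whole percent. A quick sketch of the arithmetic:
	
	package main
	
	import "fmt"
	
	func main() {
		// From the table above: 850m CPU and 220Mi memory requested.
		cpuPct := 100 * 850.0 / (8 * 1000.0)      // 10.625 -> shown as 10%
		memPct := 100 * 220 * 1024.0 / 32863360.0 // ~0.69  -> shown as 0%
		fmt.Printf("cpu %.3f%%, memory %.3f%%\n", cpuPct, memPct)
	}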
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [10c8d3efaac0744f36d6f0563dd02fb57ec58f666e9516b85b4e14209ca99b10] <==
	{"level":"info","ts":"2025-10-26T08:29:20.805723Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 switched to configuration voters=(16125559238023404339)"}
	{"level":"info","ts":"2025-10-26T08:29:20.806172Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-26T08:29:20.809105Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T08:29:20.809375Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T08:29:20.809441Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T08:29:20.809544Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:29:20.809574Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:29:21.7972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 1"}
	{"level":"info","ts":"2025-10-26T08:29:21.797283Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 1"}
	{"level":"info","ts":"2025-10-26T08:29:21.797306Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 1"}
	{"level":"info","ts":"2025-10-26T08:29:21.797323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 2"}
	{"level":"info","ts":"2025-10-26T08:29:21.797329Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-26T08:29:21.797337Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-10-26T08:29:21.797344Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-26T08:29:21.79806Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:29:21.798676Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:29:21.798702Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:29:21.798673Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-810379 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T08:29:21.798914Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T08:29:21.79895Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T08:29:21.799019Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:29:21.799115Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:29:21.799144Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:29:21.799902Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-10-26T08:29:21.799929Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 08:30:06 up  1:12,  0 user,  load average: 3.71, 2.98, 1.85
	Linux old-k8s-version-810379 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8530004ada44a848d5e30c8d7f308220b3b4de0ccbffa1beb8ae7b61565981f1] <==
	I1026 08:29:40.995302       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:29:40.995581       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:29:40.995725       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:29:40.995743       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:29:40.995772       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:29:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:29:41.290851       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:29:41.290933       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:29:41.290946       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:29:41.291713       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:29:41.591458       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:29:41.591482       1 metrics.go:72] Registering metrics
	I1026 08:29:41.591549       1 controller.go:711] "Syncing nftables rules"
	I1026 08:29:51.291429       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:29:51.291476       1 main.go:301] handling current node
	I1026 08:30:01.292338       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:30:01.292401       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5fb705d88fec2de6af85fbd42f44ff570cf4eb9f8fd8df4d280e53590688d3d1] <==
	E1026 08:29:22.933818       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","system","node-high","leader-election","workload-high","workload-low","global-default","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:29:22.942661       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","system","node-high","leader-election","catch-all","exempt","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:29:22.952575       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","system","node-high","leader-election","workload-high","workload-low","global-default"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:29:22.960825       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:29:22.960950       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","catch-all","exempt","system","node-high","leader-election","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	I1026 08:29:23.082870       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:29:23.787493       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:29:23.792066       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:29:23.792159       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:29:24.291684       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:29:24.331051       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:29:24.392127       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:29:24.398411       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1026 08:29:24.399720       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 08:29:24.404064       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:29:24.834890       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 08:29:26.035482       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 08:29:26.046743       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:29:26.056471       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	E1026 08:29:32.882978       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	I1026 08:29:38.698515       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1026 08:29:38.848555       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	E1026 08:29:42.883498       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:29:52.883751       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","global-default","catch-all","exempt","system","node-high","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E1026 08:30:02.884830       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","leader-election","workload-high","workload-low","global-default","catch-all","exempt","system"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [a597f3ec041547a35cde785ffeee4814bbfbe752deb437deae9b9f7f4523a4cb] <==
	I1026 08:29:38.168172       1 shared_informer.go:318] Caches are synced for TTL after finished
	I1026 08:29:38.191339       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:29:38.208710       1 shared_informer.go:318] Caches are synced for endpoint
	I1026 08:29:38.216015       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1026 08:29:38.247429       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:29:38.565539       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:29:38.641505       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:29:38.641529       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 08:29:38.709980       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1026 08:29:38.860685       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-455nz"
	I1026 08:29:38.864233       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6mfc2"
	I1026 08:29:39.052603       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-mz4h4"
	I1026 08:29:39.054899       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1026 08:29:39.058680       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-wrpqk"
	I1026 08:29:39.076623       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="367.397093ms"
	I1026 08:29:39.090705       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-mz4h4"
	I1026 08:29:39.107032       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="30.322749ms"
	I1026 08:29:39.116792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="9.66675ms"
	I1026 08:29:39.117118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="273.27µs"
	I1026 08:29:51.756532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="104.914µs"
	I1026 08:29:51.782288       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="83.195µs"
	I1026 08:29:52.206883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="123.746µs"
	I1026 08:29:52.976934       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1026 08:29:53.212993       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.622604ms"
	I1026 08:29:53.213106       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="65.983µs"
	
	
	==> kube-proxy [e8bef8eb052a0ae6aa5deb0c788486ab0a8c8dee3d075c1d479ad17e363c950e] <==
	I1026 08:29:39.323987       1 server_others.go:69] "Using iptables proxy"
	I1026 08:29:39.340921       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1026 08:29:39.371205       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:29:39.373507       1 server_others.go:152] "Using iptables Proxier"
	I1026 08:29:39.373541       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 08:29:39.373552       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 08:29:39.373588       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 08:29:39.373823       1 server.go:846] "Version info" version="v1.28.0"
	I1026 08:29:39.373843       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:29:39.374643       1 config.go:97] "Starting endpoint slice config controller"
	I1026 08:29:39.374683       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 08:29:39.375372       1 config.go:188] "Starting service config controller"
	I1026 08:29:39.375394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 08:29:39.375892       1 config.go:315] "Starting node config controller"
	I1026 08:29:39.375912       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 08:29:39.476600       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1026 08:29:39.476626       1 shared_informer.go:318] Caches are synced for node config
	I1026 08:29:39.476607       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [725ac5eb48183743020611aac66c64b494ecb3d4429492c2e2af816920d4238a] <==
	W1026 08:29:22.842128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 08:29:22.842155       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 08:29:22.842182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 08:29:22.842204       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 08:29:23.677569       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1026 08:29:23.677597       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1026 08:29:23.691032       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1026 08:29:23.691077       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1026 08:29:23.730800       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1026 08:29:23.730838       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1026 08:29:23.819431       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1026 08:29:23.819462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1026 08:29:23.931238       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1026 08:29:23.931366       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1026 08:29:23.942058       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1026 08:29:23.942109       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W1026 08:29:23.964590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1026 08:29:23.964630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1026 08:29:23.990148       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1026 08:29:23.990207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1026 08:29:23.991452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1026 08:29:23.991486       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1026 08:29:24.274582       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1026 08:29:24.274626       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1026 08:29:27.138184       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.015418    1403 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.869542    1403 topology_manager.go:215] "Topology Admit Handler" podUID="89cbf0d8-1b3a-4388-9a19-6130b61b8271" podNamespace="kube-system" podName="kube-proxy-455nz"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.872647    1403 topology_manager.go:215] "Topology Admit Handler" podUID="f468c1c2-21f5-4491-86c7-1237c1299721" podNamespace="kube-system" podName="kindnet-6mfc2"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884619    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89cbf0d8-1b3a-4388-9a19-6130b61b8271-lib-modules\") pod \"kube-proxy-455nz\" (UID: \"89cbf0d8-1b3a-4388-9a19-6130b61b8271\") " pod="kube-system/kube-proxy-455nz"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884680    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f468c1c2-21f5-4491-86c7-1237c1299721-lib-modules\") pod \"kindnet-6mfc2\" (UID: \"f468c1c2-21f5-4491-86c7-1237c1299721\") " pod="kube-system/kindnet-6mfc2"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884710    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f468c1c2-21f5-4491-86c7-1237c1299721-xtables-lock\") pod \"kindnet-6mfc2\" (UID: \"f468c1c2-21f5-4491-86c7-1237c1299721\") " pod="kube-system/kindnet-6mfc2"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884745    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69dcf\" (UniqueName: \"kubernetes.io/projected/f468c1c2-21f5-4491-86c7-1237c1299721-kube-api-access-69dcf\") pod \"kindnet-6mfc2\" (UID: \"f468c1c2-21f5-4491-86c7-1237c1299721\") " pod="kube-system/kindnet-6mfc2"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884784    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/89cbf0d8-1b3a-4388-9a19-6130b61b8271-kube-proxy\") pod \"kube-proxy-455nz\" (UID: \"89cbf0d8-1b3a-4388-9a19-6130b61b8271\") " pod="kube-system/kube-proxy-455nz"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884814    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f468c1c2-21f5-4491-86c7-1237c1299721-cni-cfg\") pod \"kindnet-6mfc2\" (UID: \"f468c1c2-21f5-4491-86c7-1237c1299721\") " pod="kube-system/kindnet-6mfc2"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884925    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89cbf0d8-1b3a-4388-9a19-6130b61b8271-xtables-lock\") pod \"kube-proxy-455nz\" (UID: \"89cbf0d8-1b3a-4388-9a19-6130b61b8271\") " pod="kube-system/kube-proxy-455nz"
	Oct 26 08:29:38 old-k8s-version-810379 kubelet[1403]: I1026 08:29:38.884972    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4z22\" (UniqueName: \"kubernetes.io/projected/89cbf0d8-1b3a-4388-9a19-6130b61b8271-kube-api-access-l4z22\") pod \"kube-proxy-455nz\" (UID: \"89cbf0d8-1b3a-4388-9a19-6130b61b8271\") " pod="kube-system/kube-proxy-455nz"
	Oct 26 08:29:41 old-k8s-version-810379 kubelet[1403]: I1026 08:29:41.175105    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-455nz" podStartSLOduration=3.175046101 podCreationTimestamp="2025-10-26 08:29:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:29:40.174860982 +0000 UTC m=+14.165371273" watchObservedRunningTime="2025-10-26 08:29:41.175046101 +0000 UTC m=+15.165556379"
	Oct 26 08:29:41 old-k8s-version-810379 kubelet[1403]: I1026 08:29:41.175460    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6mfc2" podStartSLOduration=1.526210249 podCreationTimestamp="2025-10-26 08:29:38 +0000 UTC" firstStartedPulling="2025-10-26 08:29:39.188225403 +0000 UTC m=+13.178735671" lastFinishedPulling="2025-10-26 08:29:40.837423183 +0000 UTC m=+14.827933460" observedRunningTime="2025-10-26 08:29:41.175356375 +0000 UTC m=+15.165866657" watchObservedRunningTime="2025-10-26 08:29:41.175408038 +0000 UTC m=+15.165918318"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.723261    1403 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.753472    1403 topology_manager.go:215] "Topology Admit Handler" podUID="0d8247bb-b952-4d45-9345-2f54d2a42b27" podNamespace="kube-system" podName="storage-provisioner"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.756739    1403 topology_manager.go:215] "Topology Admit Handler" podUID="52d85487-6b55-4451-8732-00bc722bbd41" podNamespace="kube-system" podName="coredns-5dd5756b68-wrpqk"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.882373    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7kpx\" (UniqueName: \"kubernetes.io/projected/0d8247bb-b952-4d45-9345-2f54d2a42b27-kube-api-access-n7kpx\") pod \"storage-provisioner\" (UID: \"0d8247bb-b952-4d45-9345-2f54d2a42b27\") " pod="kube-system/storage-provisioner"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.882421    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjqpg\" (UniqueName: \"kubernetes.io/projected/52d85487-6b55-4451-8732-00bc722bbd41-kube-api-access-qjqpg\") pod \"coredns-5dd5756b68-wrpqk\" (UID: \"52d85487-6b55-4451-8732-00bc722bbd41\") " pod="kube-system/coredns-5dd5756b68-wrpqk"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.882447    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/52d85487-6b55-4451-8732-00bc722bbd41-config-volume\") pod \"coredns-5dd5756b68-wrpqk\" (UID: \"52d85487-6b55-4451-8732-00bc722bbd41\") " pod="kube-system/coredns-5dd5756b68-wrpqk"
	Oct 26 08:29:51 old-k8s-version-810379 kubelet[1403]: I1026 08:29:51.882479    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0d8247bb-b952-4d45-9345-2f54d2a42b27-tmp\") pod \"storage-provisioner\" (UID: \"0d8247bb-b952-4d45-9345-2f54d2a42b27\") " pod="kube-system/storage-provisioner"
	Oct 26 08:29:52 old-k8s-version-810379 kubelet[1403]: I1026 08:29:52.197609    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.197558966 podCreationTimestamp="2025-10-26 08:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:29:52.197553654 +0000 UTC m=+26.188063935" watchObservedRunningTime="2025-10-26 08:29:52.197558966 +0000 UTC m=+26.188069244"
	Oct 26 08:29:52 old-k8s-version-810379 kubelet[1403]: I1026 08:29:52.206726    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-wrpqk" podStartSLOduration=13.206680306 podCreationTimestamp="2025-10-26 08:29:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:29:52.206494371 +0000 UTC m=+26.197004663" watchObservedRunningTime="2025-10-26 08:29:52.206680306 +0000 UTC m=+26.197190589"
	Oct 26 08:29:55 old-k8s-version-810379 kubelet[1403]: I1026 08:29:55.666798    1403 topology_manager.go:215] "Topology Admit Handler" podUID="c4b87aba-4af2-41ab-b0de-82f97987e1b5" podNamespace="default" podName="busybox"
	Oct 26 08:29:55 old-k8s-version-810379 kubelet[1403]: I1026 08:29:55.800654    1403 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm9cn\" (UniqueName: \"kubernetes.io/projected/c4b87aba-4af2-41ab-b0de-82f97987e1b5-kube-api-access-tm9cn\") pod \"busybox\" (UID: \"c4b87aba-4af2-41ab-b0de-82f97987e1b5\") " pod="default/busybox"
	Oct 26 08:29:58 old-k8s-version-810379 kubelet[1403]: I1026 08:29:58.214187    1403 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.8276807449999999 podCreationTimestamp="2025-10-26 08:29:55 +0000 UTC" firstStartedPulling="2025-10-26 08:29:55.989375517 +0000 UTC m=+29.979885792" lastFinishedPulling="2025-10-26 08:29:57.375840037 +0000 UTC m=+31.366350312" observedRunningTime="2025-10-26 08:29:58.213862295 +0000 UTC m=+32.204372576" watchObservedRunningTime="2025-10-26 08:29:58.214145265 +0000 UTC m=+32.204655545"
	
	
	==> storage-provisioner [609a2abffa6105a058dea245667a299a5da142b487b4bf1f7599e71b25e9d71d] <==
	I1026 08:29:52.118743       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:29:52.128437       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:29:52.128503       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 08:29:52.134827       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:29:52.134952       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ea5de82-4240-490f-8eb1-9a5d824d3381", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-810379_4cb842ce-afa9-4937-bcd4-667d27dba498 became leader
	I1026 08:29:52.135107       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-810379_4cb842ce-afa9-4937-bcd4-667d27dba498!
	I1026 08:29:52.235708       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-810379_4cb842ce-afa9-4937-bcd4-667d27dba498!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-810379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.26s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (381.290821ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:30:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
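For reference, the paused check behind the MK_ADDON_ENABLE_PAUSED error can be re-run by hand. A minimal sketch, assuming the node container name from this report and shell access to the Docker daemon; the runc invocation is the one quoted verbatim in the stderr above:

	# Hypothetical manual re-run of minikube's "check paused" step inside the
	# kicbase container; this is the same command the error above shows failing.
	docker exec no-preload-001983 sudo runc list -f json
	# runc reads container state from a root directory (default /run/runc when
	# run as root), so the "open /run/runc: no such file or directory" failure
	# means that directory is absent, e.g. because cri-o keeps its runc state
	# under a different --root than the default this check falls back to.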
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-001983 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-001983 describe deploy/metrics-server -n kube-system: exit status 1 (73.806122ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-001983 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
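The assertion at start_stop_delete_test.go:219 checks that the deployment's pod template references the overridden registry. A minimal sketch of querying that image directly, assuming the deployment had actually been created (here it would fail with the same NotFound as the describe call above):

	# Hypothetical direct check of the image the addon should have deployed.
	kubectl --context no-preload-001983 -n kube-system \
	  get deployment metrics-server \
	  -o jsonpath='{.spec.template.spec.containers[0].image}'
	# Expected on success: fake.domain/registry.k8s.io/echoserver:1.4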
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-001983
helpers_test.go:243: (dbg) docker inspect no-preload-001983:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	        "Created": "2025-10-26T08:29:35.306793049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 237692,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:29:35.340894525Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hosts",
	        "LogPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6-json.log",
	        "Name": "/no-preload-001983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-001983:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-001983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	                "LowerDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-001983",
	                "Source": "/var/lib/docker/volumes/no-preload-001983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-001983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-001983",
	                "name.minikube.sigs.k8s.io": "no-preload-001983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "4cc4095e3067c814455fcbfb3ed5428f365b224d56f2117e0bccffca04c07216",
	            "SandboxKey": "/var/run/docker/netns/4cc4095e3067",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33054"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33057"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33055"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33056"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-001983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:e7:c0:cc:e4:cc",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0bdb8ca3ba1ed8384cb0d6339c847a03d4b5a80b703fdd60e4df4eb3b0fbcff7",
	                    "EndpointID": "bd1a815374d27a7486588080588e5ba7904d064e9da951cc369fe80b152a78a2",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-001983",
	                        "1c02a7265549"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
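Rather than scanning the full JSON above, individual fields can be extracted with docker inspect's Go-template flag; a small sketch against the same container, using only fields present in the output above:

	# Run state of the node container ("running" per the inspect data above).
	docker inspect -f '{{.State.Status}}' no-preload-001983
	# Host port mapped to the API server port 8443/tcp ("33056" above).
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-001983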
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-001983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-001983 logs -n 25: (1.015040778s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-110992 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo containerd config dump                                                                                                                                                                                                  │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo crio config                                                                                                                                                                                                             │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cilium-110992                                                                                                                                                                                                                              │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cert-expiration-535689                                                                                                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ stop    │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:30:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:30:23.328491  249498 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:30:23.328596  249498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:30:23.328608  249498 out.go:374] Setting ErrFile to fd 2...
	I1026 08:30:23.328614  249498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:30:23.328796  249498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:30:23.329281  249498 out.go:368] Setting JSON to false
	I1026 08:30:23.330740  249498 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4374,"bootTime":1761463049,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:30:23.330798  249498 start.go:141] virtualization: kvm guest
	I1026 08:30:23.333651  249498 out.go:179] * [old-k8s-version-810379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:30:23.334914  249498 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:30:23.334948  249498 notify.go:220] Checking for updates...
	I1026 08:30:23.337301  249498 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:30:23.338523  249498 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:23.339746  249498 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:30:23.340905  249498 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:30:23.341950  249498 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:30:23.343404  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:23.344884  249498 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 08:30:23.345823  249498 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:30:23.371070  249498 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:30:23.371157  249498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:30:23.429744  249498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:30:23.419592818 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:30:23.429851  249498 docker.go:318] overlay module found
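The driver check above shells out to `docker system info --format "{{json .}}"` and decodes the one large JSON object it returns, which is where fields like CgroupDriver:systemd in the log come from. A minimal sketch of the same pattern (hypothetical struct with a small field subset, not minikube's own types):

// Sketch: run `docker system info --format "{{json .}}"` and decode only
// the fields this example cares about; the real payload is much larger.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type dockerInfo struct {
	ServerVersion string `json:"ServerVersion"`
	CgroupDriver  string `json:"CgroupDriver"`
	Driver        string `json:"Driver"` // storage driver, e.g. "overlay2"
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	fmt.Printf("docker %s, cgroup driver %s, storage driver %s\n",
		info.ServerVersion, info.CgroupDriver, info.Driver)
}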
	I1026 08:30:23.431373  249498 out.go:179] * Using the docker driver based on existing profile
	I1026 08:30:23.432333  249498 start.go:305] selected driver: docker
	I1026 08:30:23.432354  249498 start.go:925] validating driver "docker" against &{Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:23.432463  249498 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:30:23.433287  249498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:30:23.490841  249498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:30:23.481164634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:30:23.491111  249498 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:23.491149  249498 cni.go:84] Creating CNI manager for ""
	I1026 08:30:23.491194  249498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:30:23.491229  249498 start.go:349] cluster config:
	{Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:23.493904  249498 out.go:179] * Starting "old-k8s-version-810379" primary control-plane node in "old-k8s-version-810379" cluster
	I1026 08:30:23.495041  249498 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:30:23.496069  249498 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:30:23.497328  249498 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:30:23.497377  249498 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:30:23.497387  249498 cache.go:58] Caching tarball of preloaded images
	I1026 08:30:23.497416  249498 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:30:23.497474  249498 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:30:23.497489  249498 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 08:30:23.497596  249498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/config.json ...
	I1026 08:30:23.528650  249498 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:30:23.528672  249498 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:30:23.528695  249498 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:30:23.528726  249498 start.go:360] acquireMachinesLock for old-k8s-version-810379: {Name:mk1dce12657c26f87987fe3adf5e57eecaf35c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:30:23.528818  249498 start.go:364] duration metric: took 68.534µs to acquireMachinesLock for "old-k8s-version-810379"
	I1026 08:30:23.528837  249498 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:30:23.528846  249498 fix.go:54] fixHost starting: 
	I1026 08:30:23.529136  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:23.549775  249498 fix.go:112] recreateIfNeeded on old-k8s-version-810379: state=Stopped err=<nil>
	W1026 08:30:23.549806  249498 fix.go:138] unexpected machine state, will restart: <nil>
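recreateIfNeeded found the container in state=Stopped, so the fix path (logged further down for this PID) is a plain `docker start` rather than a re-create. A minimal sketch of that inspect-then-start check, assuming the container name from the log and simplified error handling:

// Sketch: query a container's state via a `docker container inspect` Go
// template and start it again if it is not running, mirroring fixHost.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	name := "old-k8s-version-810379"
	out, err := exec.Command("docker", "container", "inspect",
		name, "--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	state := strings.TrimSpace(string(out))
	fmt.Println("state:", state) // docker reports e.g. "running" or "exited"
	if state != "running" {
		// `docker start` is exactly what the log runs next for this container.
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
	}
}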
	W1026 08:30:20.801406  237215 node_ready.go:57] node "no-preload-001983" has "Ready":"False" status (will retry)
	I1026 08:30:22.801665  237215 node_ready.go:49] node "no-preload-001983" is "Ready"
	I1026 08:30:22.801697  237215 node_ready.go:38] duration metric: took 11.503293023s for node "no-preload-001983" to be "Ready" ...
	I1026 08:30:22.801715  237215 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:30:22.801773  237215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:30:22.817820  237215 api_server.go:72] duration metric: took 12.137927509s to wait for apiserver process to appear ...
	I1026 08:30:22.817852  237215 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:30:22.817878  237215 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:30:22.823463  237215 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:30:22.824503  237215 api_server.go:141] control plane version: v1.34.1
	I1026 08:30:22.824533  237215 api_server.go:131] duration metric: took 6.672822ms to wait for apiserver health ...
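The healthz probe above hits the apiserver over HTTPS on the cluster IP; until a client trusts the minikube CA it has to skip certificate verification. A minimal bounded poller in the same spirit (address taken from the log; this is an illustration, not minikube's own client code):

// Sketch: poll the apiserver /healthz endpoint until it answers 200 "ok"
// or a deadline passes. InsecureSkipVerify stands in for trusting the
// cluster CA, which a real client should do instead.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond) // apiserver may still be coming up
	}
	fmt.Println("apiserver never became healthy")
}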
	I1026 08:30:22.824544  237215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:30:22.828881  237215 system_pods.go:59] 8 kube-system pods found
	I1026 08:30:22.828912  237215 system_pods.go:61] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending
	I1026 08:30:22.828921  237215 system_pods.go:61] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:22.828926  237215 system_pods.go:61] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:22.828932  237215 system_pods.go:61] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:22.828939  237215 system_pods.go:61] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:22.828943  237215 system_pods.go:61] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:22.828948  237215 system_pods.go:61] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:22.828952  237215 system_pods.go:61] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending
	I1026 08:30:22.828960  237215 system_pods.go:74] duration metric: took 4.409352ms to wait for pod list to return data ...
	I1026 08:30:22.828974  237215 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:30:22.831324  237215 default_sa.go:45] found service account: "default"
	I1026 08:30:22.831346  237215 default_sa.go:55] duration metric: took 2.365342ms for default service account to be created ...
	I1026 08:30:22.831357  237215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:30:22.833797  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:22.833822  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending
	I1026 08:30:22.833829  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:22.833834  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:22.833839  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:22.833846  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:22.833851  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:22.833856  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:22.833870  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:22.833896  237215 retry.go:31] will retry after 283.70781ms: missing components: kube-dns
	I1026 08:30:23.121760  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.121803  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.121817  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.121826  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.121831  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.121837  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.121842  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.121847  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.121855  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.121873  237215 retry.go:31] will retry after 280.845246ms: missing components: kube-dns
	I1026 08:30:23.407603  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.407644  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.407653  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.407661  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.407667  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.407673  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.407678  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.407683  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.407690  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.407709  237215 retry.go:31] will retry after 417.039624ms: missing components: kube-dns
	I1026 08:30:23.829091  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.829125  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.829137  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.829144  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.829150  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.829159  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.829164  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.829173  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.829181  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.829202  237215 retry.go:31] will retry after 468.653678ms: missing components: kube-dns
	I1026 08:30:24.302584  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:24.302613  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running
	I1026 08:30:24.302621  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:24.302626  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:24.302640  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:24.302650  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:24.302655  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:24.302663  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:24.302669  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:30:24.302681  237215 system_pods.go:126] duration metric: took 1.471317676s to wait for k8s-apps to be running ...
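The four "will retry after ..." lines above are one poll-with-backoff loop: list the kube-system pods, report what is still missing (kube-dns), sleep a short jittered interval, and try again. A generic sketch of that pattern (hypothetical retryUntil helper, not minikube's retry package):

// Sketch of the poll/backoff loop behind the "will retry after ..." lines:
// keep checking a condition, sleeping slightly longer (with jitter) each time.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryUntil polls check() until it returns nil or the timeout elapses.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		err := check()
		if err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay = delay * 3 / 2 // grow the base interval
	}
	return errors.New("condition not met before timeout")
}

func main() {
	attempts := 0
	_ = retryUntil(10*time.Second, func() error {
		attempts++
		if attempts < 4 {
			return errors.New("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println("all components running after", attempts, "attempts")
}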
	I1026 08:30:24.302694  237215 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:30:24.302747  237215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:30:24.318011  237215 system_svc.go:56] duration metric: took 15.30736ms WaitForService to wait for kubelet
	I1026 08:30:24.318044  237215 kubeadm.go:586] duration metric: took 13.638159383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:24.318067  237215 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:30:24.322426  237215 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:30:24.322460  237215 node_conditions.go:123] node cpu capacity is 8
	I1026 08:30:24.322476  237215 node_conditions.go:105] duration metric: took 4.402583ms to run NodePressure ...
	I1026 08:30:24.322490  237215 start.go:241] waiting for startup goroutines ...
	I1026 08:30:24.322500  237215 start.go:246] waiting for cluster config update ...
	I1026 08:30:24.322514  237215 start.go:255] writing updated cluster config ...
	I1026 08:30:24.322837  237215 ssh_runner.go:195] Run: rm -f paused
	I1026 08:30:24.328840  237215 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:24.334466  237215 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.340961  237215 pod_ready.go:94] pod "coredns-66bc5c9577-p5nmq" is "Ready"
	I1026 08:30:24.340989  237215 pod_ready.go:86] duration metric: took 6.492282ms for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.343647  237215 pod_ready.go:83] waiting for pod "etcd-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.360165  237215 pod_ready.go:94] pod "etcd-no-preload-001983" is "Ready"
	I1026 08:30:24.360193  237215 pod_ready.go:86] duration metric: took 16.521025ms for pod "etcd-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.367589  237215 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.375428  237215 pod_ready.go:94] pod "kube-apiserver-no-preload-001983" is "Ready"
	I1026 08:30:24.375454  237215 pod_ready.go:86] duration metric: took 7.834025ms for pod "kube-apiserver-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.377919  237215 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:23.209500  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:23.708939  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.209116  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.709497  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.782049  243672 kubeadm.go:1113] duration metric: took 5.170440878s to wait for elevateKubeSystemPrivileges
	I1026 08:30:24.782084  243672 kubeadm.go:402] duration metric: took 16.146586455s to StartCluster
	I1026 08:30:24.782102  243672 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:24.782173  243672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:24.783886  243672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:24.784136  243672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:30:24.784149  243672 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:30:24.784204  243672 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-752315"
	I1026 08:30:24.784216  243672 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-752315"
	I1026 08:30:24.784234  243672 addons.go:69] Setting default-storageclass=true in profile "embed-certs-752315"
	I1026 08:30:24.784134  243672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:30:24.784236  243672 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:30:24.784299  243672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-752315"
	I1026 08:30:24.784383  243672 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:30:24.784837  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.784938  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.785990  243672 out.go:179] * Verifying Kubernetes components...
	I1026 08:30:24.790839  243672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:24.809086  243672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:30:24.810291  243672 addons.go:238] Setting addon default-storageclass=true in "embed-certs-752315"
	I1026 08:30:24.810343  243672 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:30:24.810444  243672 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:24.810476  243672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:30:24.810560  243672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:30:24.810821  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.842285  243672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:30:24.852371  243672 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:24.852396  243672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:30:24.852460  243672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:30:24.877819  243672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:30:24.887561  243672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:30:24.962140  243672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:24.969198  243672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:24.992735  243672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:25.074632  243672 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
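The long `kubectl ... | sed ... | kubectl replace -f -` pipeline a few lines earlier edits the CoreDNS Corefile in place: one sed expression inserts a `hosts` block before the `forward` plugin so pods can resolve host.minikube.internal, and another inserts a `log` directive before `errors`. Reconstructed from those sed expressions (abridged, not captured from the cluster), the patched Corefile looks roughly like:

.:53 {
    log          # inserted before "errors" by the second sed expression
    errors
    ...
    hosts {      # inserted before "forward" by the first sed expression
       192.168.103.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}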
	I1026 08:30:25.075877  243672 node_ready.go:35] waiting up to 6m0s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:30:25.274726  243672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 08:30:24.733632  237215 pod_ready.go:94] pod "kube-controller-manager-no-preload-001983" is "Ready"
	I1026 08:30:24.733675  237215 pod_ready.go:86] duration metric: took 355.730033ms for pod "kube-controller-manager-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.933591  237215 pod_ready.go:83] waiting for pod "kube-proxy-xpz59" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.333116  237215 pod_ready.go:94] pod "kube-proxy-xpz59" is "Ready"
	I1026 08:30:25.333146  237215 pod_ready.go:86] duration metric: took 399.525389ms for pod "kube-proxy-xpz59" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.533642  237215 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.932924  237215 pod_ready.go:94] pod "kube-scheduler-no-preload-001983" is "Ready"
	I1026 08:30:25.932951  237215 pod_ready.go:86] duration metric: took 399.286366ms for pod "kube-scheduler-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.932963  237215 pod_ready.go:40] duration metric: took 1.60408757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:25.974939  237215 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:30:25.976721  237215 out.go:179] * Done! kubectl is now configured to use "no-preload-001983" cluster and "default" namespace by default
	I1026 08:30:23.930322  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:23.930800  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:30:23.930856  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:23.930913  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:23.962068  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:23.962100  204716 cri.go:89] found id: ""
	I1026 08:30:23.962109  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:23.962168  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:23.968002  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:23.968082  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:23.994978  204716 cri.go:89] found id: ""
	I1026 08:30:23.995003  204716 logs.go:282] 0 containers: []
	W1026 08:30:23.995013  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:23.995019  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:23.995097  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:24.024197  204716 cri.go:89] found id: ""
	I1026 08:30:24.024225  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.024236  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:24.024243  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:24.024334  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:24.058660  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:24.058695  204716 cri.go:89] found id: ""
	I1026 08:30:24.058705  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:24.058772  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:24.063970  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:24.064040  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:24.098462  204716 cri.go:89] found id: ""
	I1026 08:30:24.098510  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.098522  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:24.098542  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:24.098607  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:24.134049  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:24.134075  204716 cri.go:89] found id: ""
	I1026 08:30:24.134084  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:24.134143  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:24.139649  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:24.139729  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:24.171850  204716 cri.go:89] found id: ""
	I1026 08:30:24.171878  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.171888  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:24.171895  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:24.171946  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:24.200191  204716 cri.go:89] found id: ""
	I1026 08:30:24.200220  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.200231  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:24.200241  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:24.200275  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:24.241266  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:24.241309  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:24.298376  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:24.298415  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:24.333522  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:24.333552  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:24.396435  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:24.396466  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:30:24.429267  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:24.429296  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:24.530643  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:24.530676  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:30:24.545458  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:24.545482  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:24.612228  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
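The failed `describe nodes` is expected at this point: this profile's apiserver is down mid-restart, so kubectl's connection to localhost:8443 is refused and diagnostics fall back to per-component `crictl` queries, as the preceding lines show. A sketch of that gathering loop (component list and crictl flags taken from the log):

// Sketch of the diagnostics pass above: for each control-plane component,
// list matching CRI-O containers by name and tail their logs via crictl.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager"}
	for _, name := range components {
		out, err := exec.Command("sudo", "crictl", "ps", "-a",
			"--quiet", "--name="+name).Output()
		if err != nil {
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", name)
			continue
		}
		for _, id := range ids {
			logs, _ := exec.Command("sudo", "crictl", "logs",
				"--tail", "400", id).CombinedOutput()
			fmt.Printf("==> %s [%s] <==\n%s\n", name, id, logs)
		}
	}
}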
	I1026 08:30:25.276047  243672 addons.go:514] duration metric: took 491.893443ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:30:25.578814  243672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-752315" context rescaled to 1 replicas
	W1026 08:30:27.078919  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	I1026 08:30:23.551285  249498 out.go:252] * Restarting existing docker container for "old-k8s-version-810379" ...
	I1026 08:30:23.551364  249498 cli_runner.go:164] Run: docker start old-k8s-version-810379
	I1026 08:30:23.808611  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:23.828677  249498 kic.go:430] container "old-k8s-version-810379" state is running.
	I1026 08:30:23.829671  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:23.849639  249498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/config.json ...
	I1026 08:30:23.849935  249498 machine.go:93] provisionDockerMachine start ...
	I1026 08:30:23.850010  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:23.869444  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:23.869757  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:23.869774  249498 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:30:23.870362  249498 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58306->127.0.0.1:33068: read: connection reset by peer
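The handshake failure above is transient: the container was started moments earlier and its sshd is not accepting connections yet, so libmachine simply redials until it succeeds (about three seconds later in this log). A sketch of such a dial-with-retry using golang.org/x/crypto/ssh (port and key path from the log; an illustration, not minikube's code):

// Sketch: retry an SSH dial until the freshly started container's sshd
// is accepting connections.
package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; pin host keys in real use
		Timeout:         5 * time.Second,
	}
	var client *ssh.Client
	for i := 0; i < 30; i++ {
		client, err = ssh.Dial("tcp", "127.0.0.1:33068", cfg)
		if err == nil {
			break
		}
		time.Sleep(time.Second) // sshd may not be up yet; redial
	}
	if client == nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("ssh connection established")
}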
	I1026 08:30:27.013393  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-810379
	
	I1026 08:30:27.013420  249498 ubuntu.go:182] provisioning hostname "old-k8s-version-810379"
	I1026 08:30:27.013482  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.031971  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.032182  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.032199  249498 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-810379 && echo "old-k8s-version-810379" | sudo tee /etc/hostname
	I1026 08:30:27.185017  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-810379
	
	I1026 08:30:27.185109  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.205401  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.205646  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.205666  249498 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-810379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-810379/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-810379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:30:27.353363  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:30:27.353394  249498 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:30:27.353447  249498 ubuntu.go:190] setting up certificates
	I1026 08:30:27.353469  249498 provision.go:84] configureAuth start
	I1026 08:30:27.353549  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:27.372870  249498 provision.go:143] copyHostCerts
	I1026 08:30:27.372948  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:30:27.372969  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:30:27.373067  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:30:27.373276  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:30:27.373292  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:30:27.373349  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:30:27.373467  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:30:27.373478  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:30:27.373517  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:30:27.373597  249498 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-810379 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-810379]
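provision.go generates a server certificate whose SANs cover the loopback address, the node IP, and the machine's hostnames, signed by the minikube CA key. A self-contained sketch of building such a SAN-bearing certificate with crypto/x509 (self-signed here to avoid the CA plumbing; SAN values taken from the log line above):

// Sketch: create an RSA key and a server certificate carrying the SANs
// listed in the provision log, then print it in PEM form.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-810379"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: loopback, node IP, and hostnames.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-810379"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
	}
	// Self-signed for brevity; the real flow passes the CA cert and CA key
	// as the parent and signing key instead of tmpl and priv.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &priv.PublicKey, priv)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}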
	I1026 08:30:27.511967  249498 provision.go:177] copyRemoteCerts
	I1026 08:30:27.512025  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:30:27.512082  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.531635  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:27.636937  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:30:27.661354  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 08:30:27.683151  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:30:27.705469  249498 provision.go:87] duration metric: took 351.983612ms to configureAuth
	I1026 08:30:27.705498  249498 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:30:27.705693  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:27.705805  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.729097  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.729428  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.729450  249498 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:30:28.028857  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:30:28.028888  249498 machine.go:96] duration metric: took 4.178934587s to provisionDockerMachine
	I1026 08:30:28.028900  249498 start.go:293] postStartSetup for "old-k8s-version-810379" (driver="docker")
	I1026 08:30:28.028913  249498 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:30:28.028973  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:30:28.029029  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.049661  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.153231  249498 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:30:28.157275  249498 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:30:28.157318  249498 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:30:28.157350  249498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:30:28.157414  249498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:30:28.157501  249498 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:30:28.157607  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:30:28.166071  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:30:28.184177  249498 start.go:296] duration metric: took 155.264583ms for postStartSetup
	I1026 08:30:28.184245  249498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:30:28.184339  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.202827  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.301554  249498 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:30:28.306473  249498 fix.go:56] duration metric: took 4.777621578s for fixHost
	I1026 08:30:28.306500  249498 start.go:83] releasing machines lock for "old-k8s-version-810379", held for 4.777670293s
	I1026 08:30:28.306596  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:28.324300  249498 ssh_runner.go:195] Run: cat /version.json
	I1026 08:30:28.324334  249498 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:30:28.324391  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.324394  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.344594  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.344648  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.496027  249498 ssh_runner.go:195] Run: systemctl --version
	I1026 08:30:28.502785  249498 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:30:28.538472  249498 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:30:28.543798  249498 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:30:28.543869  249498 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:30:28.551913  249498 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
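The find invocation above is logged with its shell quoting stripped; a runnable (and slightly safer) form of the same rename-to-disable pattern, assuming GNU find, looks like this:

	# rename any bridge/podman CNI configs so the runtime ignores them
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

The "nothing to disable" message (cni.go:259) that follows simply means the find matched no files.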
	I1026 08:30:28.551942  249498 start.go:495] detecting cgroup driver to use...
	I1026 08:30:28.551969  249498 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:30:28.552002  249498 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:30:28.566520  249498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:30:28.580058  249498 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:30:28.580106  249498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:30:28.595318  249498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:30:28.608925  249498 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:30:28.700099  249498 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:30:28.782716  249498 docker.go:234] disabling docker service ...
	I1026 08:30:28.782781  249498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:30:28.798127  249498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:30:28.811453  249498 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:30:28.894027  249498 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:30:28.979322  249498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:30:28.992859  249498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:30:29.008365  249498 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 08:30:29.008424  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.017848  249498 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:30:29.017909  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.027055  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.035824  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.045507  249498 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:30:29.054429  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.063775  249498 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.072842  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.081862  249498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:30:29.089840  249498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:30:29.097681  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:29.179147  249498 ssh_runner.go:195] Run: sudo systemctl restart crio
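Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings (reconstructed from the commands, not captured from the node; section headers assumed from CRI-O's stock layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.9"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]

The `systemctl daemon-reload` and `systemctl restart crio` above are what make these edits take effect.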
	I1026 08:30:29.296110  249498 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:30:29.296188  249498 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:30:29.300363  249498 start.go:563] Will wait 60s for crictl version
	I1026 08:30:29.300419  249498 ssh_runner.go:195] Run: which crictl
	I1026 08:30:29.303931  249498 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:30:29.327658  249498 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:30:29.327722  249498 ssh_runner.go:195] Run: crio --version
	I1026 08:30:29.355701  249498 ssh_runner.go:195] Run: crio --version
	I1026 08:30:29.386582  249498 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 08:30:29.387854  249498 cli_runner.go:164] Run: docker network inspect old-k8s-version-810379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:30:29.405702  249498 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 08:30:29.409951  249498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
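Both host-entry updates (host.minikube.internal here, control-plane.minikube.internal later) use the same idempotent pattern: filter out any existing entry, append the fresh one, and `sudo cp` a temp file over /etc/hosts (cp rather than mv keeps the original file's owner and mode). A generalized sketch with a hypothetical helper name:

	# pin NAME to IP in /etc/hosts, replacing any previous entry
	add_host() {
	  local ip="$1" name="$2"
	  { grep -v $'\t'"${name}"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
	  sudo cp "/tmp/h.$$" /etc/hosts && rm -f "/tmp/h.$$"
	}
	add_host 192.168.94.1 host.minikube.internal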
	I1026 08:30:29.420434  249498 kubeadm.go:883] updating cluster {Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:30:29.420643  249498 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:30:29.420719  249498 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:30:29.451475  249498 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:30:29.451495  249498 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:30:29.451538  249498 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:30:29.478576  249498 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:30:29.478595  249498 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:30:29.478602  249498 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1026 08:30:29.478691  249498 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-810379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
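The kubelet unit above uses the standard systemd override idiom: an empty `ExecStart=` first clears the value inherited from the base kubelet.service, so the second `ExecStart=` replaces it rather than adding a second command (systemd rejects multiple ExecStart lines for anything but Type=oneshot services). The same drop-in pattern works for any unit, e.g. (hypothetical paths):

	# /etc/systemd/system/kubelet.service.d/10-override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/local/bin/kubelet --config=/var/lib/kubelet/config.yaml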
	I1026 08:30:29.478764  249498 ssh_runner.go:195] Run: crio config
	I1026 08:30:29.525729  249498 cni.go:84] Creating CNI manager for ""
	I1026 08:30:29.525752  249498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:30:29.525770  249498 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:30:29.525791  249498 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-810379 NodeName:old-k8s-version-810379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:30:29.525922  249498 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-810379"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:30:29.525991  249498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 08:30:29.534809  249498 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:30:29.534902  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:30:29.542723  249498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 08:30:29.555437  249498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:30:29.569001  249498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1026 08:30:29.582829  249498 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:30:29.586871  249498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:30:29.597675  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:29.678919  249498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:29.706869  249498 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379 for IP: 192.168.94.2
	I1026 08:30:29.706895  249498 certs.go:195] generating shared ca certs ...
	I1026 08:30:29.706915  249498 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:29.707062  249498 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:30:29.707121  249498 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:30:29.707136  249498 certs.go:257] generating profile certs ...
	I1026 08:30:29.707279  249498 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.key
	I1026 08:30:29.707366  249498 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.key.328ea5c9
	I1026 08:30:29.707446  249498 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.key
	I1026 08:30:29.707578  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:30:29.707619  249498 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:30:29.707633  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:30:29.707669  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:30:29.707699  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:30:29.707730  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:30:29.707787  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:30:29.708400  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:30:29.729519  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:30:29.750437  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:30:29.771300  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:30:29.793748  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 08:30:29.813900  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:30:29.831697  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:30:29.850380  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:30:29.869157  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:30:29.888913  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:30:29.908456  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:30:29.926353  249498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:30:29.939552  249498 ssh_runner.go:195] Run: openssl version
	I1026 08:30:29.945543  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:30:29.953854  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.957619  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.957685  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.992786  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:30:30.001640  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:30:30.011033  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.015101  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.015171  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.049564  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:30:30.058211  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:30:30.066886  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.070988  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.071048  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.107030  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
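The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed CA directory lookup: a client resolves a trust anchor by hashing the certificate's subject name and opening `<hash>.0` under /etc/ssl/certs. The b5213941 link created for minikubeCA can be checked by hand (illustrative):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	b5213941
	$ readlink /etc/ssl/certs/b5213941.0
	/etc/ssl/certs/minikubeCA.pem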
	I1026 08:30:30.115653  249498 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:30:30.119654  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:30:30.155836  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:30:30.192626  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:30:30.236902  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:30:30.280228  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:30:30.335434  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
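Each `-checkend 86400` call asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means yes, non-zero means it expires within the window, which is the signal to regenerate it. In script form:

	# warn if the apiserver cert expires within the next 24h
	if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 >/dev/null; then
	  echo "apiserver cert valid for at least 24h"
	else
	  echo "apiserver cert expires within 24h"
	fi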
	I1026 08:30:30.394177  249498 kubeadm.go:400] StartCluster: {Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:30.394301  249498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:30:30.394373  249498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:30:30.431785  249498 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:30:30.431809  249498 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:30:30.431814  249498 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:30:30.431819  249498 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:30:30.431832  249498 cri.go:89] found id: ""
	I1026 08:30:30.431901  249498 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:30:30.444674  249498 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:30:30Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:30:30.444731  249498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:30:30.455736  249498 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:30:30.455756  249498 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:30:30.455809  249498 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:30:30.466118  249498 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:30:30.467601  249498 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-810379" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:30.468549  249498 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9429/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-810379" cluster setting kubeconfig missing "old-k8s-version-810379" context setting]
	I1026 08:30:30.469810  249498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:30.471791  249498 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:30:30.480232  249498 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1026 08:30:30.480277  249498 kubeadm.go:601] duration metric: took 24.51406ms to restartPrimaryControlPlane
	I1026 08:30:30.480362  249498 kubeadm.go:402] duration metric: took 86.122712ms to StartCluster
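The "does not require reconfiguration" decision at kubeadm.go:634 comes from the `diff -u` a few lines earlier: minikube renders a fresh kubeadm.yaml.new (scp'd above), compares it with the config already deployed on the node, and an empty diff (exit 0) means the existing control plane can be reused. The same check in plain shell:

	# exit 0 (no output) means the deployed kubeadm config is already current
	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new; then
	  echo "no control-plane reconfiguration required"
	fi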
	I1026 08:30:30.480405  249498 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:30.480489  249498 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:30.482592  249498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:30.482858  249498 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:30:30.482932  249498 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:30:30.483052  249498 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483073  249498 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-810379"
	W1026 08:30:30.483083  249498 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:30:30.483097  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:30.483115  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.483153  249498 addons.go:69] Setting dashboard=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483167  249498 addons.go:238] Setting addon dashboard=true in "old-k8s-version-810379"
	W1026 08:30:30.483172  249498 addons.go:247] addon dashboard should already be in state true
	I1026 08:30:30.483197  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.483522  249498 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483551  249498 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-810379"
	I1026 08:30:30.483660  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.483676  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.483843  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.486587  249498 out.go:179] * Verifying Kubernetes components...
	I1026 08:30:30.488493  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:30.512219  249498 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-810379"
	W1026 08:30:30.512240  249498 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:30:30.512285  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.512322  249498 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:30:30.512742  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.513682  249498 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:30.513700  249498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:30:30.513755  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.516802  249498 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 08:30:30.518476  249498 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 08:30:27.112886  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:27.113391  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:30:27.113449  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:27.113507  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:27.143161  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:27.143184  204716 cri.go:89] found id: ""
	I1026 08:30:27.143194  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:27.143274  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.147202  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:27.147300  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:27.174986  204716 cri.go:89] found id: ""
	I1026 08:30:27.175020  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.175036  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:27.175043  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:27.175101  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:27.203936  204716 cri.go:89] found id: ""
	I1026 08:30:27.203961  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.203971  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:27.203978  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:27.204032  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:27.235039  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:27.235065  204716 cri.go:89] found id: ""
	I1026 08:30:27.235074  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:27.235142  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.239257  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:27.239336  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:27.268111  204716 cri.go:89] found id: ""
	I1026 08:30:27.268138  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.268190  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:27.268204  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:27.268281  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:27.296081  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:27.296102  204716 cri.go:89] found id: ""
	I1026 08:30:27.296109  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:27.296164  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.300261  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:27.300342  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:27.330207  204716 cri.go:89] found id: ""
	I1026 08:30:27.330232  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.330240  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:27.330258  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:27.330315  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:27.358641  204716 cri.go:89] found id: ""
	I1026 08:30:27.358666  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.358676  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:27.358686  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:27.358701  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
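The backtick expression in the command above is a two-level fallback: use the full path of crictl if `which` finds it, otherwise try the bare name, and if the whole crictl invocation still fails, fall back to `docker ps -a`. A modernized equivalent of the same pattern, with `$(...)` and `command -v` in place of backticks and `which`:

	# list all containers via crictl, falling back to docker
	sudo "$(command -v crictl || echo crictl)" ps -a || sudo docker ps -a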
	I1026 08:30:27.390938  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:27.390966  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:27.478708  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:27.478739  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:30:27.493848  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:27.493880  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:27.555063  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:30:27.555084  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:27.555104  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:27.589735  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:27.589762  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:27.661497  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:27.661534  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:27.693080  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:27.693116  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:30.253042  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:30.253454  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:30:30.253513  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:30.253596  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:30.284393  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:30.284420  204716 cri.go:89] found id: ""
	I1026 08:30:30.284430  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:30.284486  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.289384  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:30.289454  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:30.323557  204716 cri.go:89] found id: ""
	I1026 08:30:30.323585  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.323595  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:30.323603  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:30.323664  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:30.360123  204716 cri.go:89] found id: ""
	I1026 08:30:30.360152  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.360161  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:30.360169  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:30.360298  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:30.395314  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:30.395337  204716 cri.go:89] found id: ""
	I1026 08:30:30.395348  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:30.395405  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.399431  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:30.399502  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:30.429438  204716 cri.go:89] found id: ""
	I1026 08:30:30.429465  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.429476  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:30.429484  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:30.429544  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:30.463904  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:30.463926  204716 cri.go:89] found id: ""
	I1026 08:30:30.463936  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:30.463991  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.468757  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:30.468830  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:30.511864  204716 cri.go:89] found id: ""
	I1026 08:30:30.511895  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.511905  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:30.511913  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:30.511963  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:30.562529  204716 cri.go:89] found id: ""
	I1026 08:30:30.562557  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.562567  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:30.562577  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:30.562598  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:30.651627  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:30:30.651649  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:30.651669  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:30.691299  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:30.691327  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:30.757591  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:30.757640  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:30.796518  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:30.796555  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:30.852003  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:30.852033  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:30:30.885204  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:30.885243  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:31.009602  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:31.009649  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W1026 08:30:29.079478  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	W1026 08:30:31.079785  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	I1026 08:30:30.519753  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:30:30.519772  249498 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:30:30.519844  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.547359  249498 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:30.547385  249498 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:30:30.547451  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.557519  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.558540  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.586596  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.663447  249498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:30.683017  249498 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-810379" to be "Ready" ...
	I1026 08:30:30.687369  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:30.688408  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:30:30.688437  249498 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:30:30.705463  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:30:30.705481  249498 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:30:30.711293  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:30.722240  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:30:30.722306  249498 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:30:30.741407  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:30:30.741430  249498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:30:30.759917  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:30:30.759951  249498 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:30:30.780977  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:30:30.781004  249498 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:30:30.800650  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:30:30.800674  249498 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:30:30.817828  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:30:30.817857  249498 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:30:30.831510  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:30:30.831534  249498 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:30:30.844346  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:30:32.713748  249498 node_ready.go:49] node "old-k8s-version-810379" is "Ready"
	I1026 08:30:32.713786  249498 node_ready.go:38] duration metric: took 2.030730112s for node "old-k8s-version-810379" to be "Ready" ...
	I1026 08:30:32.713802  249498 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:30:32.713854  249498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:30:33.485025  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.797581591s)
	I1026 08:30:33.485131  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.773803709s)
	I1026 08:30:33.928488  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.084082268s)
	I1026 08:30:33.928520  249498 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.214641751s)
	I1026 08:30:33.928549  249498 api_server.go:72] duration metric: took 3.445662761s to wait for apiserver process to appear ...
	I1026 08:30:33.928557  249498 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:30:33.928576  249498 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1026 08:30:33.930418  249498 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-810379 addons enable metrics-server
	
	I1026 08:30:33.931890  249498 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
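	
	The api_server.go:88/253 lines above are minikube waiting for the apiserver's /healthz endpoint to answer at https://192.168.94.2:8443. Below is a minimal Go sketch of that poll loop, not minikube's actual client code; the timeouts and the InsecureSkipVerify shortcut (used here only so the sketch runs without the cluster CA) are assumptions for illustration.
	
		// healthz_probe.go: a sketch of waiting for apiserver /healthz.
		package main
		
		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
			"time"
		)
		
		func main() {
			// Address taken from the log line above; TLS verification is
			// skipped only because this sketch has no cluster CA bundle.
			url := "https://192.168.94.2:8443/healthz"
			client := &http.Client{
				Timeout:   2 * time.Second,
				Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			}
			deadline := time.Now().Add(30 * time.Second)
			for time.Now().Before(deadline) {
				resp, err := client.Get(url)
				if err == nil {
					body, _ := io.ReadAll(resp.Body)
					resp.Body.Close()
					if resp.StatusCode == http.StatusOK {
						fmt.Printf("apiserver healthz: %s\n", body) // typically "ok"
						return
					}
				}
				time.Sleep(500 * time.Millisecond)
			}
			fmt.Println("apiserver did not become healthy before the deadline")
		}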
	
	
	==> CRI-O <==
	Oct 26 08:30:23 no-preload-001983 crio[766]: time="2025-10-26T08:30:23.172158368Z" level=info msg="Starting container: 197d1a704a4f09a968f5ae4e3bdd4fb5264e7a8a02bcd293a91013b5ab5bd701" id=4a5c4837-5005-4646-bff7-1adf0e115004 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:23 no-preload-001983 crio[766]: time="2025-10-26T08:30:23.174560536Z" level=info msg="Started container" PID=2906 containerID=197d1a704a4f09a968f5ae4e3bdd4fb5264e7a8a02bcd293a91013b5ab5bd701 description=kube-system/coredns-66bc5c9577-p5nmq/coredns id=4a5c4837-5005-4646-bff7-1adf0e115004 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aea8eefab33ec915c792b78129b4470ef6b50aca4969fed6044f2a2f00d6cc22
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.437955116Z" level=info msg="Running pod sandbox: default/busybox/POD" id=12c78475-164f-4095-a334-a3e2cb64b73e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.438064491Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.442639558Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:01b53986410f16b527e0691c10c51a24fe4d7d1bec39ce149a72b098c9db1cc1 UID:3eb3e11d-988f-48b0-a678-67f786b283c9 NetNS:/var/run/netns/13a610dc-872a-414d-80fc-9533d7b048a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e1cca8}] Aliases:map[]}"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.442676885Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.452792898Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:01b53986410f16b527e0691c10c51a24fe4d7d1bec39ce149a72b098c9db1cc1 UID:3eb3e11d-988f-48b0-a678-67f786b283c9 NetNS:/var/run/netns/13a610dc-872a-414d-80fc-9533d7b048a7 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000e1cca8}] Aliases:map[]}"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.452933069Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.453737303Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.454603332Z" level=info msg="Ran pod sandbox 01b53986410f16b527e0691c10c51a24fe4d7d1bec39ce149a72b098c9db1cc1 with infra container: default/busybox/POD" id=12c78475-164f-4095-a334-a3e2cb64b73e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.456002548Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0535d3e4-1066-4b0f-bcf9-1ba24c325143 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.456124082Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0535d3e4-1066-4b0f-bcf9-1ba24c325143 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.456155496Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0535d3e4-1066-4b0f-bcf9-1ba24c325143 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.456705252Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0b12dab3-2313-453c-9f06-3d35a33f57c4 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:30:26 no-preload-001983 crio[766]: time="2025-10-26T08:30:26.459483841Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.768173027Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0b12dab3-2313-453c-9f06-3d35a33f57c4 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.768820886Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=e9cf0ae9-4f0f-4ad2-8c29-89cc8baaa3c6 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.770245254Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=8f2e49aa-bc1f-455b-9641-587ce491ee59 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.773545053Z" level=info msg="Creating container: default/busybox/busybox" id=11a98d7a-36b5-415e-b29f-f918d63daa33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.773695703Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.778461211Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.7788788Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.802038998Z" level=info msg="Created container de7864c5ef7b95a2bbb5c745f253eebe3bd5ce3397fcdfe5210d6cdc1750765a: default/busybox/busybox" id=11a98d7a-36b5-415e-b29f-f918d63daa33 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.802635697Z" level=info msg="Starting container: de7864c5ef7b95a2bbb5c745f253eebe3bd5ce3397fcdfe5210d6cdc1750765a" id=123388d0-152a-450a-980a-16a5f420df51 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:27 no-preload-001983 crio[766]: time="2025-10-26T08:30:27.804231973Z" level=info msg="Started container" PID=2980 containerID=de7864c5ef7b95a2bbb5c745f253eebe3bd5ce3397fcdfe5210d6cdc1750765a description=default/busybox/busybox id=123388d0-152a-450a-980a-16a5f420df51 name=/runtime.v1.RuntimeService/StartContainer sandboxID=01b53986410f16b527e0691c10c51a24fe4d7d1bec39ce149a72b098c9db1cc1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	de7864c5ef7b9       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   01b53986410f1       busybox                                     default
	197d1a704a4f0       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   aea8eefab33ec       coredns-66bc5c9577-p5nmq                    kube-system
	2748d4150d130       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   acc9ba3505c8c       storage-provisioner                         kube-system
	58c44114f0a17       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   bd94eb0d94fea       kindnet-8lrm6                               kube-system
	dcdd517d9274a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   fcb592ae25c13       kube-proxy-xpz59                            kube-system
	ffc27de2ebd9d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   95a28660597ba       kube-apiserver-no-preload-001983            kube-system
	704c749866a63       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   a40b44b2d92b9       kube-controller-manager-no-preload-001983   kube-system
	6270e8690acce       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   21f84116951cc       kube-scheduler-no-preload-001983            kube-system
	be1ff1afb13e0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   7c137c79b6dc0       etcd-no-preload-001983                      kube-system
	
	
	==> coredns [197d1a704a4f09a968f5ae4e3bdd4fb5264e7a8a02bcd293a91013b5ab5bd701] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47222 - 59397 "HINFO IN 8751656605340378157.4615515908154824140. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.018350884s
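	
	The HINFO query above is CoreDNS's startup self-check. As a hedged sketch, the stdlib resolver below can be pointed at the cluster DNS service (10.96.0.10, the kube-dns ClusterIP allocated in the kube-apiserver log further down) to confirm it answers; the service IP and lookup name are assumptions taken from this report.
	
		// dns_check.go: resolve a cluster name via the kube-dns service IP.
		package main
		
		import (
			"context"
			"fmt"
			"log"
			"net"
			"time"
		)
		
		func main() {
			r := &net.Resolver{
				PreferGo: true,
				// Route every lookup to the assumed kube-dns ClusterIP.
				Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
					d := net.Dialer{Timeout: 2 * time.Second}
					return d.DialContext(ctx, network, "10.96.0.10:53")
				},
			}
			ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
			defer cancel()
			addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
			if err != nil {
				log.Fatal(err)
			}
			fmt.Println("resolved:", addrs) // expect the apiserver ClusterIP, e.g. 10.96.0.1
		}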
	
	
	==> describe nodes <==
	Name:               no-preload-001983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-001983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-001983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-001983
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:30:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:30:22 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:30:22 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:30:22 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:30:22 +0000   Sun, 26 Oct 2025 08:30:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-001983
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0d1d1615-c76d-4158-8917-674a566b71fc
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-p5nmq                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-no-preload-001983                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-8lrm6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-no-preload-001983             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-no-preload-001983    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-xpz59                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-no-preload-001983             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node no-preload-001983 event: Registered Node no-preload-001983 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-001983 status is now: NodeReady
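	
	The Conditions and Events tables above come from a "describe node" dump. A small client-go sketch that reproduces the Type/Status/Reason columns is below; the kubeconfig path is an assumption, and any kubeconfig that reaches the cluster works the same way.
	
		// node_conditions.go: print node conditions, mirroring the table above.
		package main
		
		import (
			"context"
			"fmt"
			"log"
		
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/tools/clientcmd"
		)
		
		func main() {
			// Assumed path; substitute your own kubeconfig.
			cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
			if err != nil {
				log.Fatal(err)
			}
			cs, err := kubernetes.NewForConfig(cfg)
			if err != nil {
				log.Fatal(err)
			}
			nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
			if err != nil {
				log.Fatal(err)
			}
			for _, n := range nodes.Items {
				for _, c := range n.Status.Conditions {
					// Mirrors the Type / Status / Reason columns shown above.
					fmt.Printf("%s\t%s=%s\t%s\n", n.Name, c.Type, c.Status, c.Reason)
				}
			}
		}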
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [be1ff1afb13e0bcd8e66c4355dc304e4f16b9b49eb79d9763f253d12ec305bec] <==
	{"level":"info","ts":"2025-10-26T08:30:02.490405Z","caller":"traceutil/trace.go:172","msg":"trace[890988102] range","detail":"{range_begin:/registry/events/default/no-preload-001983.1871fd4435d164b7; range_end:; response_count:1; response_revision:91; }","duration":"216.309254ms","start":"2025-10-26T08:30:02.274077Z","end":"2025-10-26T08:30:02.490386Z","steps":["trace[890988102] 'agreement among raft nodes before linearized reading'  (duration: 128.079514ms)","trace[890988102] 'range keys from in-memory index tree'  (duration: 87.975878ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:02.490395Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"214.690504ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T08:30:02.490447Z","caller":"traceutil/trace.go:172","msg":"trace[430884873] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-admin; range_end:; response_count:0; response_revision:92; }","duration":"214.752252ms","start":"2025-10-26T08:30:02.275684Z","end":"2025-10-26T08:30:02.490436Z","steps":["trace[430884873] 'agreement among raft nodes before linearized reading'  (duration: 214.660667ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:02.490524Z","caller":"traceutil/trace.go:172","msg":"trace[1563822189] transaction","detail":"{read_only:false; response_revision:92; number_of_response:1; }","duration":"216.662373ms","start":"2025-10-26T08:30:02.273832Z","end":"2025-10-26T08:30:02.490494Z","steps":["trace[1563822189] 'process raft request'  (duration: 128.240422ms)","trace[1563822189] 'compare'  (duration: 88.101926ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:30:02.619672Z","caller":"traceutil/trace.go:172","msg":"trace[1021648048] linearizableReadLoop","detail":"{readStateIndex:97; appliedIndex:97; }","duration":"127.073743ms","start":"2025-10-26T08:30:02.492573Z","end":"2025-10-26T08:30:02.619647Z","steps":["trace[1021648048] 'read index received'  (duration: 127.06484ms)","trace[1021648048] 'applied index is now lower than readState.Index'  (duration: 8.048µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:02.620641Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.045554ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/admin\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T08:30:02.620685Z","caller":"traceutil/trace.go:172","msg":"trace[1815298863] range","detail":"{range_begin:/registry/clusterroles/admin; range_end:; response_count:0; response_revision:92; }","duration":"128.105495ms","start":"2025-10-26T08:30:02.492568Z","end":"2025-10-26T08:30:02.620673Z","steps":["trace[1815298863] 'agreement among raft nodes before linearized reading'  (duration: 127.180037ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T08:30:02.620784Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"128.167831ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/priorityclasses/system-cluster-critical\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T08:30:02.620830Z","caller":"traceutil/trace.go:172","msg":"trace[1278400039] range","detail":"{range_begin:/registry/priorityclasses/system-cluster-critical; range_end:; response_count:0; response_revision:93; }","duration":"128.222513ms","start":"2025-10-26T08:30:02.492597Z","end":"2025-10-26T08:30:02.620820Z","steps":["trace[1278400039] 'agreement among raft nodes before linearized reading'  (duration: 128.147768ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:02.620892Z","caller":"traceutil/trace.go:172","msg":"trace[1354096711] transaction","detail":"{read_only:false; response_revision:93; number_of_response:1; }","duration":"128.406008ms","start":"2025-10-26T08:30:02.492465Z","end":"2025-10-26T08:30:02.620871Z","steps":["trace[1354096711] 'process raft request'  (duration: 127.221065ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:02.749020Z","caller":"traceutil/trace.go:172","msg":"trace[1636101210] linearizableReadLoop","detail":"{readStateIndex:98; appliedIndex:98; }","duration":"124.795599ms","start":"2025-10-26T08:30:02.624201Z","end":"2025-10-26T08:30:02.748996Z","steps":["trace[1636101210] 'read index received'  (duration: 124.787993ms)","trace[1636101210] 'applied index is now lower than readState.Index'  (duration: 6.485µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:02.856792Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"232.566355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T08:30:02.856850Z","caller":"traceutil/trace.go:172","msg":"trace[1549776720] range","detail":"{range_begin:/registry/clusterroles/edit; range_end:; response_count:0; response_revision:93; }","duration":"232.64623ms","start":"2025-10-26T08:30:02.624191Z","end":"2025-10-26T08:30:02.856837Z","steps":["trace[1549776720] 'agreement among raft nodes before linearized reading'  (duration: 124.891182ms)","trace[1549776720] 'range keys from in-memory index tree'  (duration: 107.645569ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:02.856886Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"107.77102ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356216768126220 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/priorityclasses/system-cluster-critical\" mod_revision:0 > success:<request_put:<key:\"/registry/priorityclasses/system-cluster-critical\" value_size:407 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-10-26T08:30:02.856943Z","caller":"traceutil/trace.go:172","msg":"trace[1197153067] transaction","detail":"{read_only:false; response_revision:94; number_of_response:1; }","duration":"233.792969ms","start":"2025-10-26T08:30:02.623141Z","end":"2025-10-26T08:30:02.856934Z","steps":["trace[1197153067] 'process raft request'  (duration: 125.925867ms)","trace[1197153067] 'compare'  (duration: 107.670625ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:30:02.859094Z","caller":"traceutil/trace.go:172","msg":"trace[491301793] transaction","detail":"{read_only:false; response_revision:95; number_of_response:1; }","duration":"234.608178ms","start":"2025-10-26T08:30:02.624474Z","end":"2025-10-26T08:30:02.859082Z","steps":["trace[491301793] 'process raft request'  (duration: 234.535792ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:03.073136Z","caller":"traceutil/trace.go:172","msg":"trace[814433535] transaction","detail":"{read_only:false; response_revision:101; number_of_response:1; }","duration":"201.148972ms","start":"2025-10-26T08:30:02.871967Z","end":"2025-10-26T08:30:03.073116Z","steps":["trace[814433535] 'process raft request'  (duration: 201.099957ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:03.073168Z","caller":"traceutil/trace.go:172","msg":"trace[1184691182] transaction","detail":"{read_only:false; response_revision:100; number_of_response:1; }","duration":"201.375811ms","start":"2025-10-26T08:30:02.871772Z","end":"2025-10-26T08:30:03.073148Z","steps":["trace[1184691182] 'process raft request'  (duration: 122.051058ms)","trace[1184691182] 'compare'  (duration: 79.104046ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:30:03.262565Z","caller":"traceutil/trace.go:172","msg":"trace[407926487] transaction","detail":"{read_only:false; response_revision:103; number_of_response:1; }","duration":"185.447077ms","start":"2025-10-26T08:30:03.077098Z","end":"2025-10-26T08:30:03.262545Z","steps":["trace[407926487] 'process raft request'  (duration: 185.405992ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:03.262604Z","caller":"traceutil/trace.go:172","msg":"trace[1159151195] transaction","detail":"{read_only:false; response_revision:102; number_of_response:1; }","duration":"186.576116ms","start":"2025-10-26T08:30:03.076006Z","end":"2025-10-26T08:30:03.262582Z","steps":["trace[1159151195] 'process raft request'  (duration: 126.640237ms)","trace[1159151195] 'compare'  (duration: 59.746276ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:03.552587Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"227.539549ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-10-26T08:30:03.552646Z","caller":"traceutil/trace.go:172","msg":"trace[286723376] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:105; }","duration":"227.61326ms","start":"2025-10-26T08:30:03.325019Z","end":"2025-10-26T08:30:03.552632Z","steps":["trace[286723376] 'agreement among raft nodes before linearized reading'  (duration: 78.027261ms)","trace[286723376] 'range keys from in-memory index tree'  (duration: 149.478031ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:30:03.552686Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"149.538195ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356216768126241 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/kube-controller-manager-no-preload-001983.1871fd445617d1bc\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/kube-controller-manager-no-preload-001983.1871fd445617d1bc\" value_size:713 lease:6414984179913350339 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-10-26T08:30:03.552830Z","caller":"traceutil/trace.go:172","msg":"trace[1389560277] transaction","detail":"{read_only:false; response_revision:107; number_of_response:1; }","duration":"276.867325ms","start":"2025-10-26T08:30:03.275952Z","end":"2025-10-26T08:30:03.552819Z","steps":["trace[1389560277] 'process raft request'  (duration: 276.812057ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:03.552863Z","caller":"traceutil/trace.go:172","msg":"trace[1343446988] transaction","detail":"{read_only:false; response_revision:106; number_of_response:1; }","duration":"278.156528ms","start":"2025-10-26T08:30:03.274694Z","end":"2025-10-26T08:30:03.552850Z","steps":["trace[1343446988] 'process raft request'  (duration: 128.40854ms)","trace[1343446988] 'compare'  (duration: 149.430571ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:30:34 up  1:13,  0 user,  load average: 4.48, 3.24, 1.97
	Linux no-preload-001983 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [58c44114f0a1708e6fc3d3f4baca5da971f256551efc345fc865ab14af9d202b] <==
	I1026 08:30:12.038082       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:30:12.038404       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 08:30:12.038553       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:30:12.038572       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:30:12.038601       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:30:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:30:12.241835       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:30:12.241868       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:30:12.241879       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:30:12.242028       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:30:12.542423       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:30:12.542448       1 metrics.go:72] Registering metrics
	I1026 08:30:12.542496       1 controller.go:711] "Syncing nftables rules"
	I1026 08:30:22.242525       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:30:22.242652       1 main.go:301] handling current node
	I1026 08:30:32.244484       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:30:32.244573       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ffc27de2ebd9dc07705132b4c07139d04107c98c32d17a3a38ab1fa2b6681628] <==
	I1026 08:30:01.218818       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:01.218885       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 08:30:01.223594       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:30:01.227483       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:01.227830       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:30:01.564349       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:30:02.491667       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:30:02.857740       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:30:02.857758       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:30:04.052744       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:30:04.098723       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:30:04.176151       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:30:04.231834       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:30:04.238188       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1026 08:30:04.239455       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:30:04.243850       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:30:05.136709       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:30:05.148110       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:30:05.158518       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:30:09.879645       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 08:30:09.879648       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 08:30:10.030772       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:10.035341       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:10.279228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1026 08:30:33.277943       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8443->192.168.76.1:51078: use of closed network connection
	
	
	==> kube-controller-manager [704c749866a63ff2c632142413401acbcc1020b5c2590ebf652d468d2003ab3c] <==
	I1026 08:30:09.175680       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:30:09.175701       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:30:09.175724       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:30:09.175831       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 08:30:09.175856       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:30:09.175927       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:30:09.176914       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:30:09.176938       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:30:09.176956       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:30:09.177295       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:30:09.177320       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:30:09.178404       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:30:09.178463       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:30:09.178887       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 08:30:09.181152       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:30:09.181181       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:30:09.181290       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1026 08:30:09.181337       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1026 08:30:09.181394       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1026 08:30:09.181405       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1026 08:30:09.181410       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1026 08:30:09.182481       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:30:09.188618       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-001983" podCIDRs=["10.244.0.0/24"]
	I1026 08:30:09.201972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:30:24.127019       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dcdd517d9274abfc26fe35ebc9ff300082b5375c000198d7a4beb583f9e5c1ef] <==
	I1026 08:30:10.295400       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:30:10.353363       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:30:10.454324       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:30:10.454361       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 08:30:10.454459       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:30:10.477798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:30:10.477852       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:30:10.484489       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:30:10.484913       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:30:10.484933       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:10.487273       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:30:10.487343       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:30:10.487327       1 config.go:200] "Starting service config controller"
	I1026 08:30:10.487383       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:30:10.487392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:30:10.487393       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:30:10.487408       1 config.go:309] "Starting node config controller"
	I1026 08:30:10.487419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:30:10.487428       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:30:10.588336       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:30:10.588365       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:30:10.588415       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [6270e8690acced7484fe3b9ef070aa894981033c2d9d719224e417b44f736095] <==
	E1026 08:30:01.184904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:30:01.184913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:30:01.184957       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:30:01.185060       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:30:01.185108       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:30:01.999796       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:30:02.012190       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:30:02.110176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:30:02.155911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:30:02.162479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:30:02.242124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:30:02.266545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:30:02.273426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:30:02.284884       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:30:02.312243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:30:02.323561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 08:30:02.340906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:30:02.345309       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:30:02.376944       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:30:02.424318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:30:02.491453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:30:02.597041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:30:02.604405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:30:02.655754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1026 08:30:04.471229       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:30:06 no-preload-001983 kubelet[2310]: I1026 08:30:06.035041    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-001983" podStartSLOduration=1.035016101 podStartE2EDuration="1.035016101s" podCreationTimestamp="2025-10-26 08:30:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:06.017538442 +0000 UTC m=+1.130466677" watchObservedRunningTime="2025-10-26 08:30:06.035016101 +0000 UTC m=+1.147944338"
	Oct 26 08:30:06 no-preload-001983 kubelet[2310]: I1026 08:30:06.035187    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-001983" podStartSLOduration=2.035175341 podStartE2EDuration="2.035175341s" podCreationTimestamp="2025-10-26 08:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:06.035150875 +0000 UTC m=+1.148079113" watchObservedRunningTime="2025-10-26 08:30:06.035175341 +0000 UTC m=+1.148103577"
	Oct 26 08:30:06 no-preload-001983 kubelet[2310]: I1026 08:30:06.054549    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-no-preload-001983" podStartSLOduration=2.054526719 podStartE2EDuration="2.054526719s" podCreationTimestamp="2025-10-26 08:30:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:06.045553282 +0000 UTC m=+1.158481528" watchObservedRunningTime="2025-10-26 08:30:06.054526719 +0000 UTC m=+1.167454950"
	Oct 26 08:30:06 no-preload-001983 kubelet[2310]: I1026 08:30:06.054781    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-001983" podStartSLOduration=3.054771731 podStartE2EDuration="3.054771731s" podCreationTimestamp="2025-10-26 08:30:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:06.054671919 +0000 UTC m=+1.167600157" watchObservedRunningTime="2025-10-26 08:30:06.054771731 +0000 UTC m=+1.167699969"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.198485    2310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.199196    2310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995666    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c7993ca-1a79-4128-8863-3a16d46c0f8d-xtables-lock\") pod \"kube-proxy-xpz59\" (UID: \"0c7993ca-1a79-4128-8863-3a16d46c0f8d\") " pod="kube-system/kube-proxy-xpz59"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995747    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c7993ca-1a79-4128-8863-3a16d46c0f8d-lib-modules\") pod \"kube-proxy-xpz59\" (UID: \"0c7993ca-1a79-4128-8863-3a16d46c0f8d\") " pod="kube-system/kube-proxy-xpz59"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995791    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8f793c9d-8d06-4fd2-a937-fe2736ff2c5a-xtables-lock\") pod \"kindnet-8lrm6\" (UID: \"8f793c9d-8d06-4fd2-a937-fe2736ff2c5a\") " pod="kube-system/kindnet-8lrm6"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995814    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9qz\" (UniqueName: \"kubernetes.io/projected/8f793c9d-8d06-4fd2-a937-fe2736ff2c5a-kube-api-access-gf9qz\") pod \"kindnet-8lrm6\" (UID: \"8f793c9d-8d06-4fd2-a937-fe2736ff2c5a\") " pod="kube-system/kindnet-8lrm6"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995832    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0c7993ca-1a79-4128-8863-3a16d46c0f8d-kube-proxy\") pod \"kube-proxy-xpz59\" (UID: \"0c7993ca-1a79-4128-8863-3a16d46c0f8d\") " pod="kube-system/kube-proxy-xpz59"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995869    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8f793c9d-8d06-4fd2-a937-fe2736ff2c5a-cni-cfg\") pod \"kindnet-8lrm6\" (UID: \"8f793c9d-8d06-4fd2-a937-fe2736ff2c5a\") " pod="kube-system/kindnet-8lrm6"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995890    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8f793c9d-8d06-4fd2-a937-fe2736ff2c5a-lib-modules\") pod \"kindnet-8lrm6\" (UID: \"8f793c9d-8d06-4fd2-a937-fe2736ff2c5a\") " pod="kube-system/kindnet-8lrm6"
	Oct 26 08:30:09 no-preload-001983 kubelet[2310]: I1026 08:30:09.995919    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87clh\" (UniqueName: \"kubernetes.io/projected/0c7993ca-1a79-4128-8863-3a16d46c0f8d-kube-api-access-87clh\") pod \"kube-proxy-xpz59\" (UID: \"0c7993ca-1a79-4128-8863-3a16d46c0f8d\") " pod="kube-system/kube-proxy-xpz59"
	Oct 26 08:30:11 no-preload-001983 kubelet[2310]: I1026 08:30:11.801802    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xpz59" podStartSLOduration=2.8017589960000002 podStartE2EDuration="2.801758996s" podCreationTimestamp="2025-10-26 08:30:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:11.030563312 +0000 UTC m=+6.143491550" watchObservedRunningTime="2025-10-26 08:30:11.801758996 +0000 UTC m=+6.914687270"
	Oct 26 08:30:12 no-preload-001983 kubelet[2310]: I1026 08:30:12.026641    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-8lrm6" podStartSLOduration=1.42914195 podStartE2EDuration="3.026621146s" podCreationTimestamp="2025-10-26 08:30:09 +0000 UTC" firstStartedPulling="2025-10-26 08:30:10.207701619 +0000 UTC m=+5.320629849" lastFinishedPulling="2025-10-26 08:30:11.805180828 +0000 UTC m=+6.918109045" observedRunningTime="2025-10-26 08:30:12.026547296 +0000 UTC m=+7.139475535" watchObservedRunningTime="2025-10-26 08:30:12.026621146 +0000 UTC m=+7.139549384"
	Oct 26 08:30:22 no-preload-001983 kubelet[2310]: I1026 08:30:22.793333    2310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 08:30:22 no-preload-001983 kubelet[2310]: I1026 08:30:22.891536    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ab93365-e465-4f64-aed0-d44be160f82d-config-volume\") pod \"coredns-66bc5c9577-p5nmq\" (UID: \"9ab93365-e465-4f64-aed0-d44be160f82d\") " pod="kube-system/coredns-66bc5c9577-p5nmq"
	Oct 26 08:30:22 no-preload-001983 kubelet[2310]: I1026 08:30:22.891598    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7hgx\" (UniqueName: \"kubernetes.io/projected/9ab93365-e465-4f64-aed0-d44be160f82d-kube-api-access-n7hgx\") pod \"coredns-66bc5c9577-p5nmq\" (UID: \"9ab93365-e465-4f64-aed0-d44be160f82d\") " pod="kube-system/coredns-66bc5c9577-p5nmq"
	Oct 26 08:30:22 no-preload-001983 kubelet[2310]: I1026 08:30:22.891656    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/23d54628-ab9a-49f0-bd02-fdf50b08c93e-tmp\") pod \"storage-provisioner\" (UID: \"23d54628-ab9a-49f0-bd02-fdf50b08c93e\") " pod="kube-system/storage-provisioner"
	Oct 26 08:30:22 no-preload-001983 kubelet[2310]: I1026 08:30:22.891682    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g64m2\" (UniqueName: \"kubernetes.io/projected/23d54628-ab9a-49f0-bd02-fdf50b08c93e-kube-api-access-g64m2\") pod \"storage-provisioner\" (UID: \"23d54628-ab9a-49f0-bd02-fdf50b08c93e\") " pod="kube-system/storage-provisioner"
	Oct 26 08:30:24 no-preload-001983 kubelet[2310]: I1026 08:30:24.064060    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p5nmq" podStartSLOduration=14.06404005 podStartE2EDuration="14.06404005s" podCreationTimestamp="2025-10-26 08:30:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:24.063838125 +0000 UTC m=+19.176766363" watchObservedRunningTime="2025-10-26 08:30:24.06404005 +0000 UTC m=+19.176968287"
	Oct 26 08:30:26 no-preload-001983 kubelet[2310]: I1026 08:30:26.129984    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.129958031 podStartE2EDuration="15.129958031s" podCreationTimestamp="2025-10-26 08:30:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:24.089451894 +0000 UTC m=+19.202380132" watchObservedRunningTime="2025-10-26 08:30:26.129958031 +0000 UTC m=+21.242886266"
	Oct 26 08:30:26 no-preload-001983 kubelet[2310]: I1026 08:30:26.213907    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79cl8\" (UniqueName: \"kubernetes.io/projected/3eb3e11d-988f-48b0-a678-67f786b283c9-kube-api-access-79cl8\") pod \"busybox\" (UID: \"3eb3e11d-988f-48b0-a678-67f786b283c9\") " pod="default/busybox"
	Oct 26 08:30:28 no-preload-001983 kubelet[2310]: I1026 08:30:28.073403    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.760086116 podStartE2EDuration="2.073383432s" podCreationTimestamp="2025-10-26 08:30:26 +0000 UTC" firstStartedPulling="2025-10-26 08:30:26.456393242 +0000 UTC m=+21.569321471" lastFinishedPulling="2025-10-26 08:30:27.769690561 +0000 UTC m=+22.882618787" observedRunningTime="2025-10-26 08:30:28.073143019 +0000 UTC m=+23.186071257" watchObservedRunningTime="2025-10-26 08:30:28.073383432 +0000 UTC m=+23.186311670"
	
	
	==> storage-provisioner [2748d4150d130c9ee0d58930663116e37f6d07b9ee835663332f94818d13d28f] <==
	I1026 08:30:23.186004       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:30:23.196176       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:30:23.196238       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:30:23.198922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:23.204393       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:30:23.204614       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:30:23.204851       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-001983_b24288b5-e4a9-4c24-b5f6-4220aca24c3a!
	I1026 08:30:23.204793       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b8f114b-3680-4914-b270-3b66442ba435", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-001983_b24288b5-e4a9-4c24-b5f6-4220aca24c3a became leader
	W1026 08:30:23.207669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:23.214952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:30:23.305854       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-001983_b24288b5-e4a9-4c24-b5f6-4220aca24c3a!
	W1026 08:30:25.219441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:25.223823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:27.227114       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:27.233225       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:29.235975       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:29.240214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:31.243059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:31.251206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:33.255447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:33.261633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
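The storage-provisioner log above is dominated by repeated "v1 Endpoints is deprecated in v1.33+" warnings, emitted because its leader election still takes an Endpoints-based lock on kube-system/k8s.io-minikube-hostpath. A minimal Go sketch of the same election against the non-deprecated Lease resource, assuming client-go's leaderelection package (the kubeconfig handling and holder identity are illustrative, not the provisioner's actual code):

	package main

	import (
		"context"
		"log"
		"os"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Lease-based lock; avoids the v1 Endpoints deprecation warnings seen above.
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     client.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: "example-holder"},
		}

		leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: func(ctx context.Context) { log.Println("acquired lease, starting provisioner") },
				OnStoppedLeading: func() { log.Println("lost lease") },
			},
		})
	}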
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-001983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (300.197477ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:30:46Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
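The MK_ADDON_ENABLE_PAUSED failure above comes from a pre-flight check that no node containers are paused, which shells out to `sudo runc list -f json` inside the node; runc exits non-zero because its state directory (/run/runc here) does not exist. A minimal Go sketch of that kind of check, assuming `runc list -f json` prints a JSON array of container states (the helper name and error handling are illustrative, not minikube's actual code):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the two fields we need from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	// pausedContainers returns the IDs of paused runc containers. When the
	// runc state directory is missing (as in the log above), the command
	// itself fails with "open /run/runc: no such file or directory".
	func pausedContainers() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := pausedContainers()
		if err != nil {
			fmt.Println("check paused:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}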
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-752315 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-752315 describe deploy/metrics-server -n kube-system: exit status 1 (76.966793ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-752315 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
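The expected string above is the --registries override prefixed onto the --images override for the MetricsServer addon. A sketch of that composition (the helper name is hypothetical, not minikube's actual function):

	package main

	import "fmt"

	// overriddenImage prefixes a per-addon registry override onto a custom
	// image reference, producing the string the assertion looks for.
	func overriddenImage(registry, image string) string {
		if registry == "" {
			return image
		}
		return registry + "/" + image
	}

	func main() {
		fmt.Println(overriddenImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
		// Output: fake.domain/registry.k8s.io/echoserver:1.4
	}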
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-752315
helpers_test.go:243: (dbg) docker inspect embed-certs-752315:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	        "Created": "2025-10-26T08:30:03.656841768Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 244677,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:30:03.695533137Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hostname",
	        "HostsPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hosts",
	        "LogPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215-json.log",
	        "Name": "/embed-certs-752315",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-752315:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-752315",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	                "LowerDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-752315",
	                "Source": "/var/lib/docker/volumes/embed-certs-752315/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-752315",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-752315",
	                "name.minikube.sigs.k8s.io": "embed-certs-752315",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5e8440bb30fcfccc18999a76742b723b25394158b242446fa4ee7179eb9917bb",
	            "SandboxKey": "/var/run/docker/netns/5e8440bb30fc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33063"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33064"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33067"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33065"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33066"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-752315": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "de:b2:dd:27:30:e8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5aa8ca4605176daf87c9c9f24c1c35f5c6618444861770e8529506402674500",
	                    "EndpointID": "34137697fb2d183b7c54b54c6a56ca84db4137e35485e2034f0f49a85eca170f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-752315",
	                        "8eca8953ad72"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
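In the inspect output above, HostConfig.PortBindings declares each port with an empty HostPort, so Docker assigns ephemeral host ports at start (33063-33067 under NetworkSettings.Ports); those are what the harness uses to reach SSH and the apiserver on 127.0.0.1. A minimal sketch of the same lookup with the Docker Go SDK (the container name and port come from the log above; the rest is illustrative):

	package main

	import (
		"context"
		"fmt"
		"log"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	func main() {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			log.Fatal(err)
		}
		defer cli.Close()

		// Inspect the node container and print the host side of 8443/tcp,
		// the apiserver port (127.0.0.1:33066 in the output above).
		info, err := cli.ContainerInspect(context.Background(), "embed-certs-752315")
		if err != nil {
			log.Fatal(err)
		}
		for _, b := range info.NetworkSettings.Ports[nat.Port("8443/tcp")] {
			fmt.Printf("8443/tcp -> %s:%s\n", b.HostIP, b.HostPort)
		}
	}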
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25: (1.124604556s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-110992 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ ssh     │ -p cilium-110992 sudo crio config                                                                                                                                                                                                             │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cilium-110992                                                                                                                                                                                                                              │ cilium-110992          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cert-expiration-535689                                                                                                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ stop    │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:30:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:30:23.328491  249498 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:30:23.328596  249498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:30:23.328608  249498 out.go:374] Setting ErrFile to fd 2...
	I1026 08:30:23.328614  249498 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:30:23.328796  249498 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:30:23.329281  249498 out.go:368] Setting JSON to false
	I1026 08:30:23.330740  249498 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4374,"bootTime":1761463049,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:30:23.330798  249498 start.go:141] virtualization: kvm guest
	I1026 08:30:23.333651  249498 out.go:179] * [old-k8s-version-810379] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:30:23.334914  249498 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:30:23.334948  249498 notify.go:220] Checking for updates...
	I1026 08:30:23.337301  249498 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:30:23.338523  249498 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:23.339746  249498 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:30:23.340905  249498 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:30:23.341950  249498 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:30:23.343404  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:23.344884  249498 out.go:179] * Kubernetes 1.34.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.34.1
	I1026 08:30:23.345823  249498 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:30:23.371070  249498 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:30:23.371157  249498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:30:23.429744  249498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:30:23.419592818 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:30:23.429851  249498 docker.go:318] overlay module found
	I1026 08:30:23.431373  249498 out.go:179] * Using the docker driver based on existing profile
	I1026 08:30:23.432333  249498 start.go:305] selected driver: docker
	I1026 08:30:23.432354  249498 start.go:925] validating driver "docker" against &{Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:23.432463  249498 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:30:23.433287  249498 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:30:23.490841  249498 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:30:23.481164634 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:30:23.491111  249498 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:23.491149  249498 cni.go:84] Creating CNI manager for ""
	I1026 08:30:23.491194  249498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:30:23.491229  249498 start.go:349] cluster config:
	{Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:23.493904  249498 out.go:179] * Starting "old-k8s-version-810379" primary control-plane node in "old-k8s-version-810379" cluster
	I1026 08:30:23.495041  249498 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:30:23.496069  249498 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:30:23.497328  249498 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:30:23.497377  249498 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I1026 08:30:23.497387  249498 cache.go:58] Caching tarball of preloaded images
	I1026 08:30:23.497416  249498 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:30:23.497474  249498 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:30:23.497489  249498 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I1026 08:30:23.497596  249498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/config.json ...
	I1026 08:30:23.528650  249498 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:30:23.528672  249498 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:30:23.528695  249498 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:30:23.528726  249498 start.go:360] acquireMachinesLock for old-k8s-version-810379: {Name:mk1dce12657c26f87987fe3adf5e57eecaf35c8f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:30:23.528818  249498 start.go:364] duration metric: took 68.534µs to acquireMachinesLock for "old-k8s-version-810379"
	I1026 08:30:23.528837  249498 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:30:23.528846  249498 fix.go:54] fixHost starting: 
	I1026 08:30:23.529136  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:23.549775  249498 fix.go:112] recreateIfNeeded on old-k8s-version-810379: state=Stopped err=<nil>
	W1026 08:30:23.549806  249498 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 08:30:20.801406  237215 node_ready.go:57] node "no-preload-001983" has "Ready":"False" status (will retry)
	I1026 08:30:22.801665  237215 node_ready.go:49] node "no-preload-001983" is "Ready"
	I1026 08:30:22.801697  237215 node_ready.go:38] duration metric: took 11.503293023s for node "no-preload-001983" to be "Ready" ...
	I1026 08:30:22.801715  237215 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:30:22.801773  237215 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:30:22.817820  237215 api_server.go:72] duration metric: took 12.137927509s to wait for apiserver process to appear ...
	I1026 08:30:22.817852  237215 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:30:22.817878  237215 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:30:22.823463  237215 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:30:22.824503  237215 api_server.go:141] control plane version: v1.34.1
	I1026 08:30:22.824533  237215 api_server.go:131] duration metric: took 6.672822ms to wait for apiserver health ...
	I1026 08:30:22.824544  237215 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:30:22.828881  237215 system_pods.go:59] 8 kube-system pods found
	I1026 08:30:22.828912  237215 system_pods.go:61] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending
	I1026 08:30:22.828921  237215 system_pods.go:61] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:22.828926  237215 system_pods.go:61] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:22.828932  237215 system_pods.go:61] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:22.828939  237215 system_pods.go:61] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:22.828943  237215 system_pods.go:61] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:22.828948  237215 system_pods.go:61] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:22.828952  237215 system_pods.go:61] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending
	I1026 08:30:22.828960  237215 system_pods.go:74] duration metric: took 4.409352ms to wait for pod list to return data ...
	I1026 08:30:22.828974  237215 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:30:22.831324  237215 default_sa.go:45] found service account: "default"
	I1026 08:30:22.831346  237215 default_sa.go:55] duration metric: took 2.365342ms for default service account to be created ...
	I1026 08:30:22.831357  237215 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:30:22.833797  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:22.833822  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending
	I1026 08:30:22.833829  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:22.833834  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:22.833839  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:22.833846  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:22.833851  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:22.833856  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:22.833870  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:22.833896  237215 retry.go:31] will retry after 283.70781ms: missing components: kube-dns
	I1026 08:30:23.121760  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.121803  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.121817  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.121826  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.121831  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.121837  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.121842  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.121847  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.121855  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.121873  237215 retry.go:31] will retry after 280.845246ms: missing components: kube-dns
	I1026 08:30:23.407603  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.407644  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.407653  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.407661  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.407667  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.407673  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.407678  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.407683  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.407690  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.407709  237215 retry.go:31] will retry after 417.039624ms: missing components: kube-dns
	I1026 08:30:23.829091  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:23.829125  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:23.829137  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:23.829144  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:23.829150  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:23.829159  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:23.829164  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:23.829173  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:23.829181  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:23.829202  237215 retry.go:31] will retry after 468.653678ms: missing components: kube-dns
	I1026 08:30:24.302584  237215 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:24.302613  237215 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running
	I1026 08:30:24.302621  237215 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running
	I1026 08:30:24.302626  237215 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:30:24.302640  237215 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running
	I1026 08:30:24.302650  237215 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running
	I1026 08:30:24.302655  237215 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:30:24.302663  237215 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running
	I1026 08:30:24.302669  237215 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:30:24.302681  237215 system_pods.go:126] duration metric: took 1.471317676s to wait for k8s-apps to be running ...
	I1026 08:30:24.302694  237215 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:30:24.302747  237215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:30:24.318011  237215 system_svc.go:56] duration metric: took 15.30736ms WaitForService to wait for kubelet
	I1026 08:30:24.318044  237215 kubeadm.go:586] duration metric: took 13.638159383s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:24.318067  237215 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:30:24.322426  237215 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:30:24.322460  237215 node_conditions.go:123] node cpu capacity is 8
	I1026 08:30:24.322476  237215 node_conditions.go:105] duration metric: took 4.402583ms to run NodePressure ...
	I1026 08:30:24.322490  237215 start.go:241] waiting for startup goroutines ...
	I1026 08:30:24.322500  237215 start.go:246] waiting for cluster config update ...
	I1026 08:30:24.322514  237215 start.go:255] writing updated cluster config ...
	I1026 08:30:24.322837  237215 ssh_runner.go:195] Run: rm -f paused
	I1026 08:30:24.328840  237215 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:24.334466  237215 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.340961  237215 pod_ready.go:94] pod "coredns-66bc5c9577-p5nmq" is "Ready"
	I1026 08:30:24.340989  237215 pod_ready.go:86] duration metric: took 6.492282ms for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.343647  237215 pod_ready.go:83] waiting for pod "etcd-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.360165  237215 pod_ready.go:94] pod "etcd-no-preload-001983" is "Ready"
	I1026 08:30:24.360193  237215 pod_ready.go:86] duration metric: took 16.521025ms for pod "etcd-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.367589  237215 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.375428  237215 pod_ready.go:94] pod "kube-apiserver-no-preload-001983" is "Ready"
	I1026 08:30:24.375454  237215 pod_ready.go:86] duration metric: took 7.834025ms for pod "kube-apiserver-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.377919  237215 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
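The retry loop above (retry.go:31) polls the kube-system pod list with randomized, growing delays until kube-dns reports Running. A minimal, self-contained Go sketch of that wait-with-backoff pattern, assuming a hypothetical check function in place of the real system-pods query (this is not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor polls check until it succeeds or the deadline passes, sleeping
// a jittered, growing delay between attempts, in the style of the log's
// "will retry after 283.70781ms" messages.
func waitFor(check func() (bool, error), timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		ok, err := check()
		if err != nil {
			return err
		}
		if ok {
			return nil
		}
		sleep := delay/2 + time.Duration(rand.Int63n(int64(delay))) // randomize the delay
		fmt.Printf("will retry after %v: missing components: kube-dns\n", sleep)
		time.Sleep(sleep)
		delay = delay * 3 / 2 // grow the backoff
	}
	return errors.New("timed out waiting for components")
}

func main() {
	attempts := 0
	err := waitFor(func() (bool, error) {
		attempts++
		return attempts >= 4, nil // pretend kube-dns turns Running on the 4th poll
	}, 30*time.Second)
	fmt.Println(err)
}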
	I1026 08:30:23.209500  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:23.708939  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.209116  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.709497  243672 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:30:24.782049  243672 kubeadm.go:1113] duration metric: took 5.170440878s to wait for elevateKubeSystemPrivileges
	I1026 08:30:24.782084  243672 kubeadm.go:402] duration metric: took 16.146586455s to StartCluster
	I1026 08:30:24.782102  243672 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:24.782173  243672 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:24.783886  243672 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:24.784136  243672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:30:24.784149  243672 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:30:24.784204  243672 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-752315"
	I1026 08:30:24.784216  243672 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-752315"
	I1026 08:30:24.784234  243672 addons.go:69] Setting default-storageclass=true in profile "embed-certs-752315"
	I1026 08:30:24.784134  243672 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:30:24.784236  243672 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:30:24.784299  243672 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-752315"
	I1026 08:30:24.784383  243672 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:30:24.784837  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.784938  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.785990  243672 out.go:179] * Verifying Kubernetes components...
	I1026 08:30:24.790839  243672 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:24.809086  243672 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:30:24.810291  243672 addons.go:238] Setting addon default-storageclass=true in "embed-certs-752315"
	I1026 08:30:24.810343  243672 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:30:24.810444  243672 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:24.810476  243672 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:30:24.810560  243672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:30:24.810821  243672 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:30:24.842285  243672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:30:24.852371  243672 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:24.852396  243672 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:30:24.852460  243672 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:30:24.877819  243672 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33063 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:30:24.887561  243672 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:30:24.962140  243672 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:24.969198  243672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:24.992735  243672 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:25.074632  243672 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1026 08:30:25.075877  243672 node_ready.go:35] waiting up to 6m0s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:30:25.274726  243672 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
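The kubectl-get | sed | kubectl-replace pipeline above rewrites the coredns ConfigMap so host.minikube.internal resolves to the network gateway. A sketch of the same Corefile edit in plain Go, assuming an illustrative Corefile string rather than the live ConfigMap:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS "hosts" block immediately before the
// forward directive, mirroring the sed transformation in the log.
func injectHostRecord(corefile, hostIP string) string {
	hosts := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	var b strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			b.WriteString(hosts) // place the hosts block just above forward
		}
		b.WriteString(line)
	}
	return b.String()
}

func main() {
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.103.1"))
}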
	I1026 08:30:24.733632  237215 pod_ready.go:94] pod "kube-controller-manager-no-preload-001983" is "Ready"
	I1026 08:30:24.733675  237215 pod_ready.go:86] duration metric: took 355.730033ms for pod "kube-controller-manager-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:24.933591  237215 pod_ready.go:83] waiting for pod "kube-proxy-xpz59" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.333116  237215 pod_ready.go:94] pod "kube-proxy-xpz59" is "Ready"
	I1026 08:30:25.333146  237215 pod_ready.go:86] duration metric: took 399.525389ms for pod "kube-proxy-xpz59" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.533642  237215 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.932924  237215 pod_ready.go:94] pod "kube-scheduler-no-preload-001983" is "Ready"
	I1026 08:30:25.932951  237215 pod_ready.go:86] duration metric: took 399.286366ms for pod "kube-scheduler-no-preload-001983" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:25.932963  237215 pod_ready.go:40] duration metric: took 1.60408757s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:25.974939  237215 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:30:25.976721  237215 out.go:179] * Done! kubectl is now configured to use "no-preload-001983" cluster and "default" namespace by default
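The pod_ready.go lines above wait for each labelled kube-system pod to report Ready. A rough equivalent that shells out to kubectl with a JSONPath filter; the namespace and label selector are copied from the log, but the helper itself is an illustrative assumption:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// podsReady reports whether every pod matching the label selector has a
// Ready condition of "True", roughly what the pod_ready.go waits verify.
func podsReady(selector string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pods",
		"-n", "kube-system", "-l", selector,
		"-o", `jsonpath={.items[*].status.conditions[?(@.type=="Ready")].status}`,
	).Output()
	if err != nil {
		return false, err
	}
	for _, s := range strings.Fields(string(out)) {
		if s != "True" {
			return false, nil
		}
	}
	return true, nil
}

func main() {
	ok, err := podsReady("k8s-app=kube-dns")
	fmt.Println(ok, err)
}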
	I1026 08:30:23.930322  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:23.930800  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
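api_server.go:253/269 above probe the apiserver's /healthz endpoint and get "connection refused" while the control plane restarts. A minimal sketch of that probe, assuming TLS verification is skipped as a local health check would:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// checkHealthz performs a GET against the apiserver healthz endpoint,
// returning the dial error (e.g. connection refused) if it is down.
func checkHealthz(addr string) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://" + addr + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println("healthz status:", resp.Status)
	return nil
}

func main() {
	fmt.Println(checkHealthz("192.168.85.2:8443"))
}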
	I1026 08:30:23.930856  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:23.930913  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:23.962068  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:23.962100  204716 cri.go:89] found id: ""
	I1026 08:30:23.962109  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:23.962168  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:23.968002  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:23.968082  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:23.994978  204716 cri.go:89] found id: ""
	I1026 08:30:23.995003  204716 logs.go:282] 0 containers: []
	W1026 08:30:23.995013  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:23.995019  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:23.995097  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:24.024197  204716 cri.go:89] found id: ""
	I1026 08:30:24.024225  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.024236  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:24.024243  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:24.024334  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:24.058660  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:24.058695  204716 cri.go:89] found id: ""
	I1026 08:30:24.058705  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:24.058772  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:24.063970  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:24.064040  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:24.098462  204716 cri.go:89] found id: ""
	I1026 08:30:24.098510  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.098522  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:24.098542  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:24.098607  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:24.134049  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:24.134075  204716 cri.go:89] found id: ""
	I1026 08:30:24.134084  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:24.134143  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:24.139649  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:24.139729  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:24.171850  204716 cri.go:89] found id: ""
	I1026 08:30:24.171878  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.171888  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:24.171895  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:24.171946  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:24.200191  204716 cri.go:89] found id: ""
	I1026 08:30:24.200220  204716 logs.go:282] 0 containers: []
	W1026 08:30:24.200231  204716 logs.go:284] No container was found matching "storage-provisioner"
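The cri.go:54/89 sequence above enumerates containers per component by running crictl with a name filter and collecting the returned IDs. A self-contained sketch of that listing step (sudo elevation and error handling simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listCRIContainers asks crictl for the IDs of all containers whose name
// matches `name`; crictl --quiet prints one container ID per line.
func listCRIContainers(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			ids = append(ids, line)
		}
	}
	return ids, nil
}

func main() {
	ids, err := listCRIContainers("kube-apiserver")
	fmt.Println(ids, err)
}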
	I1026 08:30:24.200241  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:24.200275  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:24.241266  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:24.241309  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:24.298376  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:24.298415  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:24.333522  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:24.333552  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:24.396435  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:24.396466  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:30:24.429267  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:24.429296  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:24.530643  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:24.530676  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:30:24.545458  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:24.545482  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:24.612228  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:30:25.276047  243672 addons.go:514] duration metric: took 491.893443ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:30:25.578814  243672 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-752315" context rescaled to 1 replicas
	W1026 08:30:27.078919  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	I1026 08:30:23.551285  249498 out.go:252] * Restarting existing docker container for "old-k8s-version-810379" ...
	I1026 08:30:23.551364  249498 cli_runner.go:164] Run: docker start old-k8s-version-810379
	I1026 08:30:23.808611  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:23.828677  249498 kic.go:430] container "old-k8s-version-810379" state is running.
	I1026 08:30:23.829671  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:23.849639  249498 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/config.json ...
	I1026 08:30:23.849935  249498 machine.go:93] provisionDockerMachine start ...
	I1026 08:30:23.850010  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:23.869444  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:23.869757  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:23.869774  249498 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:30:23.870362  249498 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58306->127.0.0.1:33068: read: connection reset by peer
	I1026 08:30:27.013393  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-810379
	
	I1026 08:30:27.013420  249498 ubuntu.go:182] provisioning hostname "old-k8s-version-810379"
	I1026 08:30:27.013482  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.031971  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.032182  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.032199  249498 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-810379 && echo "old-k8s-version-810379" | sudo tee /etc/hostname
	I1026 08:30:27.185017  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-810379
	
	I1026 08:30:27.185109  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.205401  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.205646  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.205666  249498 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-810379' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-810379/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-810379' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:30:27.353363  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:30:27.353394  249498 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:30:27.353447  249498 ubuntu.go:190] setting up certificates
	I1026 08:30:27.353469  249498 provision.go:84] configureAuth start
	I1026 08:30:27.353549  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:27.372870  249498 provision.go:143] copyHostCerts
	I1026 08:30:27.372948  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:30:27.372969  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:30:27.373067  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:30:27.373276  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:30:27.373292  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:30:27.373349  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:30:27.373467  249498 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:30:27.373478  249498 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:30:27.373517  249498 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:30:27.373597  249498 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-810379 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-810379]
	I1026 08:30:27.511967  249498 provision.go:177] copyRemoteCerts
	I1026 08:30:27.512025  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:30:27.512082  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.531635  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:27.636937  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:30:27.661354  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1026 08:30:27.683151  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:30:27.705469  249498 provision.go:87] duration metric: took 351.983612ms to configureAuth
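provision.go:117 above generates a server certificate whose SANs cover the node's IPs and hostnames. A sketch using the standard crypto/x509 API; real minikube signs with its CA, while this shortens to a self-signed certificate:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert issues a self-signed certificate carrying the given IP
// and DNS SANs, as in the san=[...] list logged above.
func newServerCert(ips []net.IP, dnsNames []string) ([]byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-810379"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		IPAddresses:  ips,
		DNSNames:     dnsNames,
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), nil
}

func main() {
	pemBytes, err := newServerCert(
		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		[]string{"localhost", "minikube", "old-k8s-version-810379"},
	)
	fmt.Println(len(pemBytes), err)
}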
	I1026 08:30:27.705498  249498 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:30:27.705693  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:27.705805  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:27.729097  249498 main.go:141] libmachine: Using SSH client type: native
	I1026 08:30:27.729428  249498 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33068 <nil> <nil>}
	I1026 08:30:27.729450  249498 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:30:28.028857  249498 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:30:28.028888  249498 machine.go:96] duration metric: took 4.178934587s to provisionDockerMachine
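Every ssh_runner.go line above opens a session to the container's forwarded SSH port (127.0.0.1:33068) and runs one command. A sketch of that runner using golang.org/x/crypto/ssh; the helper and its wiring are assumptions, not minikube's code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runOverSSH dials the forwarded port and runs a single command, roughly
// what each "ssh_runner.go:195] Run:" line does.
func runOverSSH(addr, user string, signer ssh.Signer, cmd string) (string, error) {
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container, no known_hosts
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(runOverSSH("127.0.0.1:33068", "docker", signer, "hostname"))
}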
	I1026 08:30:28.028900  249498 start.go:293] postStartSetup for "old-k8s-version-810379" (driver="docker")
	I1026 08:30:28.028913  249498 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:30:28.028973  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:30:28.029029  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.049661  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.153231  249498 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:30:28.157275  249498 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:30:28.157318  249498 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:30:28.157350  249498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:30:28.157414  249498 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:30:28.157501  249498 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:30:28.157607  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:30:28.166071  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:30:28.184177  249498 start.go:296] duration metric: took 155.264583ms for postStartSetup
	I1026 08:30:28.184245  249498 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:30:28.184339  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.202827  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.301554  249498 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:30:28.306473  249498 fix.go:56] duration metric: took 4.777621578s for fixHost
	I1026 08:30:28.306500  249498 start.go:83] releasing machines lock for "old-k8s-version-810379", held for 4.777670293s
	I1026 08:30:28.306596  249498 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-810379
	I1026 08:30:28.324300  249498 ssh_runner.go:195] Run: cat /version.json
	I1026 08:30:28.324334  249498 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:30:28.324391  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.324394  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:28.344594  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.344648  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:28.496027  249498 ssh_runner.go:195] Run: systemctl --version
	I1026 08:30:28.502785  249498 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:30:28.538472  249498 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:30:28.543798  249498 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:30:28.543869  249498 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:30:28.551913  249498 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:30:28.551942  249498 start.go:495] detecting cgroup driver to use...
	I1026 08:30:28.551969  249498 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:30:28.552002  249498 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:30:28.566520  249498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:30:28.580058  249498 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:30:28.580106  249498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:30:28.595318  249498 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:30:28.608925  249498 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:30:28.700099  249498 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:30:28.782716  249498 docker.go:234] disabling docker service ...
	I1026 08:30:28.782781  249498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:30:28.798127  249498 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:30:28.811453  249498 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:30:28.894027  249498 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:30:28.979322  249498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:30:28.992859  249498 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:30:29.008365  249498 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1026 08:30:29.008424  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.017848  249498 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:30:29.017909  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.027055  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.035824  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.045507  249498 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:30:29.054429  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.063775  249498 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.072842  249498 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:30:29.081862  249498 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:30:29.089840  249498 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:30:29.097681  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:29.179147  249498 ssh_runner.go:195] Run: sudo systemctl restart crio
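The sed commands above rewrite individual `key = value` lines in the CRI-O drop-in before restarting the service. A Go sketch of the same line-oriented rewrite; the config text is illustrative, the real file being /etc/crio/crio.conf.d/02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

// setCrioOption replaces any existing `key = ...` line with a quoted
// value, the in-place equivalent of the sed invocations in the log.
func setCrioOption(conf, key, value string) string {
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	return re.ReplaceAllString(conf, fmt.Sprintf("%s = %q", key, value))
}

func main() {
	conf := "[crio.image]\npause_image = \"old\"\n[crio.runtime]\ncgroup_manager = \"cgroupfs\"\n"
	conf = setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9")
	conf = setCrioOption(conf, "cgroup_manager", "systemd")
	fmt.Print(conf)
}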
	I1026 08:30:29.296110  249498 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:30:29.296188  249498 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:30:29.300363  249498 start.go:563] Will wait 60s for crictl version
	I1026 08:30:29.300419  249498 ssh_runner.go:195] Run: which crictl
	I1026 08:30:29.303931  249498 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:30:29.327658  249498 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:30:29.327722  249498 ssh_runner.go:195] Run: crio --version
	I1026 08:30:29.355701  249498 ssh_runner.go:195] Run: crio --version
	I1026 08:30:29.386582  249498 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.1 ...
	I1026 08:30:29.387854  249498 cli_runner.go:164] Run: docker network inspect old-k8s-version-810379 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:30:29.405702  249498 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 08:30:29.409951  249498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:30:29.420434  249498 kubeadm.go:883] updating cluster {Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:30:29.420643  249498 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1026 08:30:29.420719  249498 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:30:29.451475  249498 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:30:29.451495  249498 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:30:29.451538  249498 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:30:29.478576  249498 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:30:29.478595  249498 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:30:29.478602  249498 kubeadm.go:934] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1026 08:30:29.478691  249498 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-810379 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:30:29.478764  249498 ssh_runner.go:195] Run: crio config
	I1026 08:30:29.525729  249498 cni.go:84] Creating CNI manager for ""
	I1026 08:30:29.525752  249498 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:30:29.525770  249498 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:30:29.525791  249498 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-810379 NodeName:old-k8s-version-810379 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:30:29.525922  249498 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-810379"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:30:29.525991  249498 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1026 08:30:29.534809  249498 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:30:29.534902  249498 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:30:29.542723  249498 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1026 08:30:29.555437  249498 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:30:29.569001  249498 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
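The kubeadm.yaml.new just copied above is rendered by minikube from a template filled with the values logged at kubeadm.go:190/196. A toy sketch of that rendering with text/template; the template fragment and struct shape are assumptions, only the values come from the log:

package main

import (
	"os"
	"text/template"
)

// kubeadmTmpl is a tiny stand-in for minikube's kubeadm config template;
// only a few of the fields shown in the log appear here.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.94.2",
		"APIServerPort":    8443,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "old-k8s-version-810379",
	})
	if err != nil {
		panic(err)
	}
}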
	I1026 08:30:29.582829  249498 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:30:29.586871  249498 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:30:29.597675  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:29.678919  249498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:29.706869  249498 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379 for IP: 192.168.94.2
	I1026 08:30:29.706895  249498 certs.go:195] generating shared ca certs ...
	I1026 08:30:29.706915  249498 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:29.707062  249498 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:30:29.707121  249498 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:30:29.707136  249498 certs.go:257] generating profile certs ...
	I1026 08:30:29.707279  249498 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.key
	I1026 08:30:29.707366  249498 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.key.328ea5c9
	I1026 08:30:29.707446  249498 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.key
	I1026 08:30:29.707578  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:30:29.707619  249498 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:30:29.707633  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:30:29.707669  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:30:29.707699  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:30:29.707730  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:30:29.707787  249498 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:30:29.708400  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:30:29.729519  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:30:29.750437  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:30:29.771300  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:30:29.793748  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 08:30:29.813900  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:30:29.831697  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:30:29.850380  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:30:29.869157  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:30:29.888913  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:30:29.908456  249498 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:30:29.926353  249498 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:30:29.939552  249498 ssh_runner.go:195] Run: openssl version
	I1026 08:30:29.945543  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:30:29.953854  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.957619  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.957685  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:30:29.992786  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:30:30.001640  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:30:30.011033  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.015101  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.015171  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:30:30.049564  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:30:30.058211  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:30:30.066886  249498 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.070988  249498 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.071048  249498 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:30:30.107030  249498 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
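The openssl/ln pairs above compute each certificate's subject hash and symlink <hash>.0 to it so OpenSSL can locate it in /etc/ssl/certs (e.g. b5213941.0 for minikubeCA.pem). A sketch of one such step, shelling out to openssl; paths are taken from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash computes the subject hash of a PEM certificate and
// creates the <hash>.0 symlink, as the openssl + "ln -fs" pair does.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout",
		"-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	os.Remove(link) // "ln -fs" semantics: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	fmt.Println(linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
}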
	I1026 08:30:30.115653  249498 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:30:30.119654  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:30:30.155836  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:30:30.192626  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:30:30.236902  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:30:30.280228  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:30:30.335434  249498 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1026 08:30:30.394177  249498 kubeadm.go:400] StartCluster: {Name:old-k8s-version-810379 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-810379 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:30:30.394301  249498 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:30:30.394373  249498 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:30:30.431785  249498 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:30:30.431809  249498 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:30:30.431814  249498 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:30:30.431819  249498 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:30:30.431832  249498 cri.go:89] found id: ""
	I1026 08:30:30.431901  249498 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:30:30.444674  249498 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:30:30Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:30:30.444731  249498 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:30:30.455736  249498 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:30:30.455756  249498 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:30:30.455809  249498 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:30:30.466118  249498 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:30:30.467601  249498 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-810379" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:30.468549  249498 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9429/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-810379" cluster setting kubeconfig missing "old-k8s-version-810379" context setting]
	I1026 08:30:30.469810  249498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:30.471791  249498 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:30:30.480232  249498 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1026 08:30:30.480277  249498 kubeadm.go:601] duration metric: took 24.51406ms to restartPrimaryControlPlane
	I1026 08:30:30.480362  249498 kubeadm.go:402] duration metric: took 86.122712ms to StartCluster
	I1026 08:30:30.480405  249498 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:30:30.480489  249498 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:30:30.482592  249498 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
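The kubeconfig.go lines show the repair path: the profile's cluster and context entries are missing, so the file is rewritten under a write lock. A minimal sketch of the same repair using client-go's clientcmd package (profile name, server URL, and path copied from the log; this is not minikube's actual code):

-- go sketch --
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds cluster and context entries if they are
// missing, mirroring the "needs updating (will repair)" step above.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	if _, ok := cfg.Clusters[name]; !ok {
		cluster := clientcmdapi.NewCluster()
		cluster.Server = server
		cfg.Clusters[name] = cluster
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := clientcmdapi.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		cfg.Contexts[name] = ctx
	}
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	err := repairKubeconfig(
		"/home/jenkins/minikube-integration/21772-9429/kubeconfig",
		"old-k8s-version-810379",
		"https://192.168.94.2:8443")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /go sketch --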
	I1026 08:30:30.482858  249498 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:30:30.482932  249498 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:30:30.483052  249498 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483073  249498 addons.go:238] Setting addon storage-provisioner=true in "old-k8s-version-810379"
	W1026 08:30:30.483083  249498 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:30:30.483097  249498 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:30:30.483115  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.483153  249498 addons.go:69] Setting dashboard=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483167  249498 addons.go:238] Setting addon dashboard=true in "old-k8s-version-810379"
	W1026 08:30:30.483172  249498 addons.go:247] addon dashboard should already be in state true
	I1026 08:30:30.483197  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.483522  249498 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-810379"
	I1026 08:30:30.483551  249498 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-810379"
	I1026 08:30:30.483660  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.483676  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.483843  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.486587  249498 out.go:179] * Verifying Kubernetes components...
	I1026 08:30:30.488493  249498 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:30:30.512219  249498 addons.go:238] Setting addon default-storageclass=true in "old-k8s-version-810379"
	W1026 08:30:30.512240  249498 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:30:30.512285  249498 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:30:30.512322  249498 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:30:30.512742  249498 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:30:30.513682  249498 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:30.513700  249498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:30:30.513755  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.516802  249498 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 08:30:30.518476  249498 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 08:30:27.112886  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:27.113391  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:30:27.113449  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:27.113507  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:27.143161  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:27.143184  204716 cri.go:89] found id: ""
	I1026 08:30:27.143194  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:27.143274  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.147202  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:27.147300  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:27.174986  204716 cri.go:89] found id: ""
	I1026 08:30:27.175020  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.175036  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:27.175043  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:27.175101  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:27.203936  204716 cri.go:89] found id: ""
	I1026 08:30:27.203961  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.203971  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:27.203978  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:27.204032  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:27.235039  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:27.235065  204716 cri.go:89] found id: ""
	I1026 08:30:27.235074  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:27.235142  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.239257  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:27.239336  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:27.268111  204716 cri.go:89] found id: ""
	I1026 08:30:27.268138  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.268190  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:27.268204  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:27.268281  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:27.296081  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:27.296102  204716 cri.go:89] found id: ""
	I1026 08:30:27.296109  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:27.296164  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:27.300261  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:27.300342  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:27.330207  204716 cri.go:89] found id: ""
	I1026 08:30:27.330232  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.330240  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:27.330258  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:27.330315  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:27.358641  204716 cri.go:89] found id: ""
	I1026 08:30:27.358666  204716 logs.go:282] 0 containers: []
	W1026 08:30:27.358676  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:27.358686  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:27.358701  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:30:27.390938  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:27.390966  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:27.478708  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:27.478739  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:30:27.493848  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:27.493880  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:27.555063  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:30:27.555084  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:27.555104  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:27.589735  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:27.589762  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:27.661497  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:27.661534  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:27.693080  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:27.693116  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:30.253042  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:30:30.253454  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:30:30.253513  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:30.253596  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:30.284393  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:30.284420  204716 cri.go:89] found id: ""
	I1026 08:30:30.284430  204716 logs.go:282] 1 containers: [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:30.284486  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.289384  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:30.289454  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:30.323557  204716 cri.go:89] found id: ""
	I1026 08:30:30.323585  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.323595  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:30.323603  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:30.323664  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:30.360123  204716 cri.go:89] found id: ""
	I1026 08:30:30.360152  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.360161  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:30.360169  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:30.360298  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:30.395314  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:30.395337  204716 cri.go:89] found id: ""
	I1026 08:30:30.395348  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:30.395405  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.399431  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:30.399502  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:30.429438  204716 cri.go:89] found id: ""
	I1026 08:30:30.429465  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.429476  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:30.429484  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:30.429544  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:30.463904  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:30.463926  204716 cri.go:89] found id: ""
	I1026 08:30:30.463936  204716 logs.go:282] 1 containers: [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:30.463991  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:30.468757  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:30.468830  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:30.511864  204716 cri.go:89] found id: ""
	I1026 08:30:30.511895  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.511905  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:30.511913  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:30.511963  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:30.562529  204716 cri.go:89] found id: ""
	I1026 08:30:30.562557  204716 logs.go:282] 0 containers: []
	W1026 08:30:30.562567  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:30.562577  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:30.562598  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:30.651627  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:30:30.651649  204716 logs.go:123] Gathering logs for kube-apiserver [20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca] ...
	I1026 08:30:30.651669  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:30.691299  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:30:30.691327  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:30.757591  204716 logs.go:123] Gathering logs for kube-controller-manager [ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03] ...
	I1026 08:30:30.757640  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:30.796518  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:30:30.796555  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:30:30.852003  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:30:30.852033  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:30:30.885204  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:30:30.885243  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:30:31.009602  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:30:31.009649  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
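Process 204716 above is in the standard recovery loop: poll the apiserver's /healthz until it answers 200, and on each refused connection gather kubelet, dmesg, and CRI-O logs for the report. A sketch of the polling half (endpoint taken from the log; TLS verification is skipped only to keep the sketch self-contained, whereas a real check should pin the cluster CA):

-- go sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it
// returns 200 or the deadline passes, the loop behind the repeated
// "Checking apiserver healthz" lines above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(2 * time.Second) // roughly the spacing seen in the log
	}
	return fmt.Errorf("apiserver at %s never became healthy", url)
}

func main() {
	if err := waitForHealthz("https://192.168.85.2:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
-- /go sketch --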
	W1026 08:30:29.079478  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	W1026 08:30:31.079785  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	I1026 08:30:30.519753  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:30:30.519772  249498 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:30:30.519844  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.547359  249498 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:30.547385  249498 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:30:30.547451  249498 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:30:30.557519  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.558540  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.586596  249498 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:30:30.663447  249498 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:30:30.683017  249498 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-810379" to be "Ready" ...
	I1026 08:30:30.687369  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:30:30.688408  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:30:30.688437  249498 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:30:30.705463  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:30:30.705481  249498 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:30:30.711293  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:30:30.722240  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:30:30.722306  249498 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:30:30.741407  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:30:30.741430  249498 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:30:30.759917  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:30:30.759951  249498 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:30:30.780977  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:30:30.781004  249498 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:30:30.800650  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:30:30.800674  249498 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:30:30.817828  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:30:30.817857  249498 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:30:30.831510  249498 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:30:30.831534  249498 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:30:30.844346  249498 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:30:32.713748  249498 node_ready.go:49] node "old-k8s-version-810379" is "Ready"
	I1026 08:30:32.713786  249498 node_ready.go:38] duration metric: took 2.030730112s for node "old-k8s-version-810379" to be "Ready" ...
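node_ready above polls the node object until its Ready condition turns True. A client-go sketch of the same wait (kubeconfig path and node name are illustrative):

-- go sketch --
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's Ready condition, like the
// node_ready.go lines above ("waiting up to 6m0s for node ...").
func waitNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("node %q never became Ready", name)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitNodeReady(cs, "old-k8s-version-810379", 6*time.Minute))
}
-- /go sketch --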
	I1026 08:30:32.713802  249498 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:30:32.713854  249498 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:30:33.485025  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.797581591s)
	I1026 08:30:33.485131  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.773803709s)
	I1026 08:30:33.928488  249498 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (3.084082268s)
	I1026 08:30:33.928520  249498 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.214641751s)
	I1026 08:30:33.928549  249498 api_server.go:72] duration metric: took 3.445662761s to wait for apiserver process to appear ...
	I1026 08:30:33.928557  249498 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:30:33.928576  249498 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1026 08:30:33.930418  249498 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p old-k8s-version-810379 addons enable metrics-server
	
	I1026 08:30:33.931890  249498 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, dashboard
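Each addon was staged by scp-ing its manifests into /etc/kubernetes/addons and applying them in a single kubectl invocation with one -f flag per file (the long Run line above). A sketch of building that invocation (binary path and manifest paths copied from the log):

-- go sketch --
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddons mirrors the single kubectl invocation in the log:
// every staged manifest is passed as its own -f flag so the whole
// addon is applied in one call.
func applyAddons(kubectl string, manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command(kubectl, args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	err := applyAddons("/var/lib/minikube/binaries/v1.28.0/kubectl", []string{
		"/etc/kubernetes/addons/dashboard-ns.yaml",
		"/etc/kubernetes/addons/dashboard-dp.yaml",
		"/etc/kubernetes/addons/dashboard-svc.yaml",
	})
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
-- /go sketch --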
	I1026 08:30:33.526345  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	W1026 08:30:33.578910  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	W1026 08:30:35.579075  243672 node_ready.go:57] node "embed-certs-752315" has "Ready":"False" status (will retry)
	I1026 08:30:36.079434  243672 node_ready.go:49] node "embed-certs-752315" is "Ready"
	I1026 08:30:36.079467  243672 node_ready.go:38] duration metric: took 11.00354685s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:30:36.079485  243672 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:30:36.079588  243672 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:30:36.094564  243672 api_server.go:72] duration metric: took 11.310212491s to wait for apiserver process to appear ...
	I1026 08:30:36.094593  243672 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:30:36.094624  243672 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:30:36.099146  243672 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 08:30:36.100065  243672 api_server.go:141] control plane version: v1.34.1
	I1026 08:30:36.100088  243672 api_server.go:131] duration metric: took 5.489384ms to wait for apiserver health ...
	I1026 08:30:36.100096  243672 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:30:36.106192  243672 system_pods.go:59] 8 kube-system pods found
	I1026 08:30:36.106238  243672 system_pods.go:61] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:36.106263  243672 system_pods.go:61] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running
	I1026 08:30:36.106283  243672 system_pods.go:61] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running
	I1026 08:30:36.106294  243672 system_pods.go:61] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running
	I1026 08:30:36.106300  243672 system_pods.go:61] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running
	I1026 08:30:36.106304  243672 system_pods.go:61] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running
	I1026 08:30:36.106320  243672 system_pods.go:61] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running
	I1026 08:30:36.106328  243672 system_pods.go:61] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:36.106336  243672 system_pods.go:74] duration metric: took 6.234119ms to wait for pod list to return data ...
	I1026 08:30:36.106347  243672 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:30:36.108711  243672 default_sa.go:45] found service account: "default"
	I1026 08:30:36.108738  243672 default_sa.go:55] duration metric: took 2.384298ms for default service account to be created ...
	I1026 08:30:36.108749  243672 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:30:36.206197  243672 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:36.206232  243672 system_pods.go:89] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:36.206238  243672 system_pods.go:89] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running
	I1026 08:30:36.206244  243672 system_pods.go:89] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running
	I1026 08:30:36.206261  243672 system_pods.go:89] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running
	I1026 08:30:36.206267  243672 system_pods.go:89] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running
	I1026 08:30:36.206272  243672 system_pods.go:89] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running
	I1026 08:30:36.206276  243672 system_pods.go:89] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running
	I1026 08:30:36.206284  243672 system_pods.go:89] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:36.206326  243672 retry.go:31] will retry after 208.636136ms: missing components: kube-dns
	I1026 08:30:36.418895  243672 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:36.418925  243672 system_pods.go:89] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:36.418931  243672 system_pods.go:89] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running
	I1026 08:30:36.418936  243672 system_pods.go:89] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running
	I1026 08:30:36.418940  243672 system_pods.go:89] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running
	I1026 08:30:36.418943  243672 system_pods.go:89] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running
	I1026 08:30:36.418946  243672 system_pods.go:89] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running
	I1026 08:30:36.418949  243672 system_pods.go:89] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running
	I1026 08:30:36.418954  243672 system_pods.go:89] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:30:36.418967  243672 retry.go:31] will retry after 348.649871ms: missing components: kube-dns
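The retry.go lines re-list kube-system pods with a growing, jittered delay until no component is missing. A generic sketch of that retry shape (the delays here are arbitrary, not minikube's exact backoff):

-- go sketch --
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// retryUntil re-runs check with an exponentially growing, jittered
// delay: the shape behind the "will retry after 208.636136ms:
// missing components" lines above.
func retryUntil(timeout time.Duration, check func() error) error {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for {
		err := check()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out: %w", err)
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		time.Sleep(delay + jitter)
		delay *= 2 // back off
	}
}

func main() {
	attempts := 0
	err := retryUntil(5*time.Second, func() error {
		attempts++
		if attempts < 3 {
			return fmt.Errorf("missing components: kube-dns")
		}
		return nil
	})
	fmt.Println(err, "after", attempts, "attempts")
}
-- /go sketch --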
	I1026 08:30:36.774551  243672 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:36.774577  243672 system_pods.go:89] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Running
	I1026 08:30:36.774583  243672 system_pods.go:89] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running
	I1026 08:30:36.774586  243672 system_pods.go:89] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running
	I1026 08:30:36.774590  243672 system_pods.go:89] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running
	I1026 08:30:36.774596  243672 system_pods.go:89] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running
	I1026 08:30:36.774600  243672 system_pods.go:89] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running
	I1026 08:30:36.774602  243672 system_pods.go:89] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running
	I1026 08:30:36.774606  243672 system_pods.go:89] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Running
	I1026 08:30:36.774613  243672 system_pods.go:126] duration metric: took 665.858386ms to wait for k8s-apps to be running ...
	I1026 08:30:36.774620  243672 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:30:36.774668  243672 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:30:36.788372  243672 system_svc.go:56] duration metric: took 13.742059ms WaitForService to wait for kubelet
	I1026 08:30:36.788399  243672 kubeadm.go:586] duration metric: took 12.004052402s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:36.788419  243672 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:30:36.791605  243672 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:30:36.791634  243672 node_conditions.go:123] node cpu capacity is 8
	I1026 08:30:36.791655  243672 node_conditions.go:105] duration metric: took 3.222834ms to run NodePressure ...
	I1026 08:30:36.791668  243672 start.go:241] waiting for startup goroutines ...
	I1026 08:30:36.791684  243672 start.go:246] waiting for cluster config update ...
	I1026 08:30:36.791699  243672 start.go:255] writing updated cluster config ...
	I1026 08:30:36.792003  243672 ssh_runner.go:195] Run: rm -f paused
	I1026 08:30:36.796004  243672 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:36.799814  243672 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jktn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.803815  243672 pod_ready.go:94] pod "coredns-66bc5c9577-jktn8" is "Ready"
	I1026 08:30:36.803834  243672 pod_ready.go:86] duration metric: took 4.001674ms for pod "coredns-66bc5c9577-jktn8" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.805662  243672 pod_ready.go:83] waiting for pod "etcd-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.809658  243672 pod_ready.go:94] pod "etcd-embed-certs-752315" is "Ready"
	I1026 08:30:36.809684  243672 pod_ready.go:86] duration metric: took 4.005997ms for pod "etcd-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.811937  243672 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.815493  243672 pod_ready.go:94] pod "kube-apiserver-embed-certs-752315" is "Ready"
	I1026 08:30:36.815511  243672 pod_ready.go:86] duration metric: took 3.55248ms for pod "kube-apiserver-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:36.817304  243672 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:37.201395  243672 pod_ready.go:94] pod "kube-controller-manager-embed-certs-752315" is "Ready"
	I1026 08:30:37.201425  243672 pod_ready.go:86] duration metric: took 384.099277ms for pod "kube-controller-manager-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:37.401135  243672 pod_ready.go:83] waiting for pod "kube-proxy-5bf98" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:37.800593  243672 pod_ready.go:94] pod "kube-proxy-5bf98" is "Ready"
	I1026 08:30:37.800619  243672 pod_ready.go:86] duration metric: took 399.459463ms for pod "kube-proxy-5bf98" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:38.000823  243672 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:33.932884  249498 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1026 08:30:33.933415  249498 addons.go:514] duration metric: took 3.450492886s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1026 08:30:33.934186  249498 api_server.go:141] control plane version: v1.28.0
	I1026 08:30:33.934208  249498 api_server.go:131] duration metric: took 5.64571ms to wait for apiserver health ...
	I1026 08:30:33.934215  249498 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:30:33.938150  249498 system_pods.go:59] 8 kube-system pods found
	I1026 08:30:33.938176  249498 system_pods.go:61] "coredns-5dd5756b68-wrpqk" [52d85487-6b55-4451-8732-00bc722bbd41] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:33.938183  249498 system_pods.go:61] "etcd-old-k8s-version-810379" [16aa39cd-c748-4594-8fe5-08c626e2fb54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:30:33.938189  249498 system_pods.go:61] "kindnet-6mfc2" [f468c1c2-21f5-4491-86c7-1237c1299721] Running
	I1026 08:30:33.938197  249498 system_pods.go:61] "kube-apiserver-old-k8s-version-810379" [3c1fcd76-f436-43b4-9e6f-e37a37e6805c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:30:33.938224  249498 system_pods.go:61] "kube-controller-manager-old-k8s-version-810379" [8c9b750c-1f50-407f-9a2d-2bb3f4cb3e1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:30:33.938236  249498 system_pods.go:61] "kube-proxy-455nz" [89cbf0d8-1b3a-4388-9a19-6130b61b8271] Running
	I1026 08:30:33.938277  249498 system_pods.go:61] "kube-scheduler-old-k8s-version-810379" [c5c15f3b-1d5d-402b-a01c-5c4eb98fa0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:30:33.938301  249498 system_pods.go:61] "storage-provisioner" [0d8247bb-b952-4d45-9345-2f54d2a42b27] Running
	I1026 08:30:33.938308  249498 system_pods.go:74] duration metric: took 4.086711ms to wait for pod list to return data ...
	I1026 08:30:33.938318  249498 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:30:33.940141  249498 default_sa.go:45] found service account: "default"
	I1026 08:30:33.940158  249498 default_sa.go:55] duration metric: took 1.83457ms for default service account to be created ...
	I1026 08:30:33.940165  249498 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:30:33.943309  249498 system_pods.go:86] 8 kube-system pods found
	I1026 08:30:33.943337  249498 system_pods.go:89] "coredns-5dd5756b68-wrpqk" [52d85487-6b55-4451-8732-00bc722bbd41] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:30:33.943348  249498 system_pods.go:89] "etcd-old-k8s-version-810379" [16aa39cd-c748-4594-8fe5-08c626e2fb54] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:30:33.943358  249498 system_pods.go:89] "kindnet-6mfc2" [f468c1c2-21f5-4491-86c7-1237c1299721] Running
	I1026 08:30:33.943370  249498 system_pods.go:89] "kube-apiserver-old-k8s-version-810379" [3c1fcd76-f436-43b4-9e6f-e37a37e6805c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:30:33.943383  249498 system_pods.go:89] "kube-controller-manager-old-k8s-version-810379" [8c9b750c-1f50-407f-9a2d-2bb3f4cb3e1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:30:33.943389  249498 system_pods.go:89] "kube-proxy-455nz" [89cbf0d8-1b3a-4388-9a19-6130b61b8271] Running
	I1026 08:30:33.943398  249498 system_pods.go:89] "kube-scheduler-old-k8s-version-810379" [c5c15f3b-1d5d-402b-a01c-5c4eb98fa0fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:30:33.943403  249498 system_pods.go:89] "storage-provisioner" [0d8247bb-b952-4d45-9345-2f54d2a42b27] Running
	I1026 08:30:33.943411  249498 system_pods.go:126] duration metric: took 3.24062ms to wait for k8s-apps to be running ...
	I1026 08:30:33.943420  249498 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:30:33.943464  249498 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:30:33.958037  249498 system_svc.go:56] duration metric: took 14.605999ms WaitForService to wait for kubelet
	I1026 08:30:33.958064  249498 kubeadm.go:586] duration metric: took 3.475179595s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:30:33.958081  249498 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:30:33.960598  249498 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:30:33.960626  249498 node_conditions.go:123] node cpu capacity is 8
	I1026 08:30:33.960640  249498 node_conditions.go:105] duration metric: took 2.553941ms to run NodePressure ...
	I1026 08:30:33.960650  249498 start.go:241] waiting for startup goroutines ...
	I1026 08:30:33.960657  249498 start.go:246] waiting for cluster config update ...
	I1026 08:30:33.960666  249498 start.go:255] writing updated cluster config ...
	I1026 08:30:33.960900  249498 ssh_runner.go:195] Run: rm -f paused
	I1026 08:30:33.964944  249498 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:33.969215  249498 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-wrpqk" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:30:35.975088  249498 pod_ready.go:104] pod "coredns-5dd5756b68-wrpqk" is not "Ready", error: <nil>
	I1026 08:30:38.400841  243672 pod_ready.go:94] pod "kube-scheduler-embed-certs-752315" is "Ready"
	I1026 08:30:38.400869  243672 pod_ready.go:86] duration metric: took 400.020932ms for pod "kube-scheduler-embed-certs-752315" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:30:38.400881  243672 pod_ready.go:40] duration metric: took 1.604845595s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:30:38.443363  243672 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:30:38.445285  243672 out.go:179] * Done! kubectl is now configured to use "embed-certs-752315" cluster and "default" namespace by default
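The pod_ready steps above walk a list of component label selectors (k8s-app=kube-dns, component=etcd, ...) and wait for each matching pod to report Ready=True. A client-go sketch of one such check (kubeconfig path illustrative):

-- go sketch --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// readyPods lists kube-system pods matching a label selector and
// returns the names of those carrying a Ready=True condition.
func readyPods(cs *kubernetes.Clientset, selector string) ([]string, error) {
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
		metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return nil, err
	}
	var ready []string
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = append(ready, p.Name)
			}
		}
	}
	return ready, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("",
		"/home/jenkins/minikube-integration/21772-9429/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	names, err := readyPods(cs, "k8s-app=kube-dns")
	fmt.Println(names, err)
}
-- /go sketch --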
	I1026 08:30:38.527398  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1026 08:30:38.527468  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:30:38.527529  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:30:38.557099  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:30:38.557122  204716 cri.go:89] found id: "20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca"
	I1026 08:30:38.557129  204716 cri.go:89] found id: ""
	I1026 08:30:38.557139  204716 logs.go:282] 2 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a 20ef6cad69e7b270ad9111bf3db3ba2dee577ab5a3ee230959c47852ca5ed4ca]
	I1026 08:30:38.557199  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:38.561422  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:38.565363  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:30:38.565419  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:30:38.597527  204716 cri.go:89] found id: ""
	I1026 08:30:38.597554  204716 logs.go:282] 0 containers: []
	W1026 08:30:38.597564  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:30:38.597571  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:30:38.597637  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:30:38.628338  204716 cri.go:89] found id: ""
	I1026 08:30:38.628367  204716 logs.go:282] 0 containers: []
	W1026 08:30:38.628377  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:30:38.628384  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:30:38.628449  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:30:38.657218  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:30:38.657239  204716 cri.go:89] found id: ""
	I1026 08:30:38.657260  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:30:38.657331  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:38.661240  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:30:38.661324  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:30:38.688910  204716 cri.go:89] found id: ""
	I1026 08:30:38.688939  204716 logs.go:282] 0 containers: []
	W1026 08:30:38.688951  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:30:38.688957  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:30:38.689013  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:30:38.715730  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:30:38.715755  204716 cri.go:89] found id: "ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03"
	I1026 08:30:38.715759  204716 cri.go:89] found id: ""
	I1026 08:30:38.715769  204716 logs.go:282] 2 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d ed3007f15f10f570c75afddc0e880fc071c836c17c8c036dc818199f49b54a03]
	I1026 08:30:38.715823  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:38.719965  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:30:38.723576  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:30:38.723624  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:30:38.750131  204716 cri.go:89] found id: ""
	I1026 08:30:38.750159  204716 logs.go:282] 0 containers: []
	W1026 08:30:38.750169  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:30:38.750177  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:30:38.750235  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:30:38.776850  204716 cri.go:89] found id: ""
	I1026 08:30:38.776875  204716 logs.go:282] 0 containers: []
	W1026 08:30:38.776883  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:30:38.776898  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:30:38.776909  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:30:38.475376  249498 pod_ready.go:104] pod "coredns-5dd5756b68-wrpqk" is not "Ready", error: <nil>
	W1026 08:30:40.475912  249498 pod_ready.go:104] pod "coredns-5dd5756b68-wrpqk" is not "Ready", error: <nil>
	W1026 08:30:42.974517  249498 pod_ready.go:104] pod "coredns-5dd5756b68-wrpqk" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 26 08:30:36 embed-certs-752315 crio[775]: time="2025-10-26T08:30:36.078179048Z" level=info msg="Starting container: 613ce5f8265362cf8d6891cb9c4cefec969e0f2fcd9ac758c61b35a60eb864a6" id=8912db28-2506-4c3d-8371-504aa39034e5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:36 embed-certs-752315 crio[775]: time="2025-10-26T08:30:36.080405583Z" level=info msg="Started container" PID=1859 containerID=613ce5f8265362cf8d6891cb9c4cefec969e0f2fcd9ac758c61b35a60eb864a6 description=kube-system/coredns-66bc5c9577-jktn8/coredns id=8912db28-2506-4c3d-8371-504aa39034e5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=67aa7dd6258dbf43a49c6c9e2412edd8c32955ac3091b58709cff23667899b67
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.898319096Z" level=info msg="Running pod sandbox: default/busybox/POD" id=3d2b083f-e1e0-489e-8e05-e0b57a49706e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.898426171Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.90288739Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4a81c3a960c8a9fabe447de4845bafcd920fded4b21009976343e4c2a1a11c94 UID:5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4 NetNS:/var/run/netns/a1a02512-612a-48a5-ac41-1797a2294e16 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006fc370}] Aliases:map[]}"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.902920996Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.912328229Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:4a81c3a960c8a9fabe447de4845bafcd920fded4b21009976343e4c2a1a11c94 UID:5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4 NetNS:/var/run/netns/a1a02512-612a-48a5-ac41-1797a2294e16 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0006fc370}] Aliases:map[]}"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.912460982Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.913154505Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.913938837Z" level=info msg="Ran pod sandbox 4a81c3a960c8a9fabe447de4845bafcd920fded4b21009976343e4c2a1a11c94 with infra container: default/busybox/POD" id=3d2b083f-e1e0-489e-8e05-e0b57a49706e name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.915184916Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=cf28fd7e-d4b0-4fdd-aa6a-4cba7dd755fb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.915328813Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=cf28fd7e-d4b0-4fdd-aa6a-4cba7dd755fb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.915360239Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=cf28fd7e-d4b0-4fdd-aa6a-4cba7dd755fb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.916067496Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b2c9d223-314e-4e10-9851-575c58768b21 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:30:38 embed-certs-752315 crio[775]: time="2025-10-26T08:30:38.917782474Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.181150957Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b2c9d223-314e-4e10-9851-575c58768b21 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.18189613Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=f8de2628-6337-4387-9d59-d1847483c7c6 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.183164937Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=684d35f1-ec1d-4edc-8642-e070e87f0b49 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.186505263Z" level=info msg="Creating container: default/busybox/busybox" id=eac9a1e2-92ad-4fe7-9573-023e585d55bf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.186622337Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.190225903Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.190674967Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.220577709Z" level=info msg="Created container 3ded4668bf5ff949d07d0d557caa96da463774e6f7eeb6af9e923e7b44235e2a: default/busybox/busybox" id=eac9a1e2-92ad-4fe7-9573-023e585d55bf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.221142951Z" level=info msg="Starting container: 3ded4668bf5ff949d07d0d557caa96da463774e6f7eeb6af9e923e7b44235e2a" id=60769e64-47da-4177-b102-990d5ab1cc50 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:40 embed-certs-752315 crio[775]: time="2025-10-26T08:30:40.222864765Z" level=info msg="Started container" PID=1938 containerID=3ded4668bf5ff949d07d0d557caa96da463774e6f7eeb6af9e923e7b44235e2a description=default/busybox/busybox id=60769e64-47da-4177-b102-990d5ab1cc50 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4a81c3a960c8a9fabe447de4845bafcd920fded4b21009976343e4c2a1a11c94
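
The CRI-O entries above trace one complete pod lifecycle over the CRI: RunPodSandbox (with the CNI attach to kindnet), ImageStatus (a miss), PullImage, CreateContainer, StartContainer. The same state can be inspected from inside the node with crictl; a sketch, assuming the embed-certs-752315 profile is still running:

	minikube -p embed-certs-752315 ssh -- sudo crictl pods --name busybox
	minikube -p embed-certs-752315 ssh -- sudo crictl ps --pod 4a81c3a960c8a
	minikube -p embed-certs-752315 ssh -- sudo crictl images | grep busybox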
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	3ded4668bf5ff       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   4a81c3a960c8a       busybox                                      default
	613ce5f826536       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   67aa7dd6258db       coredns-66bc5c9577-jktn8                     kube-system
	75a7b4f984851       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   413c591adb727       storage-provisioner                          kube-system
	1237b1106128b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   dd89b28d9ace5       kube-proxy-5bf98                             kube-system
	83f873fae615b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   8e6f4374325b3       kindnet-m4lzl                                kube-system
	3319b03dc46d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   2736f71e891b2       kube-controller-manager-embed-certs-752315   kube-system
	ca8a762eec884       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   24a310cb38d6c       kube-apiserver-embed-certs-752315            kube-system
	61862e7dc1b6d       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   02671d88622b0       etcd-embed-certs-752315                      kube-system
	b9e9e130230f3       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   ed2eed877f138       kube-scheduler-embed-certs-752315            kube-system
	
	
	==> coredns [613ce5f8265362cf8d6891cb9c4cefec969e0f2fcd9ac758c61b35a60eb864a6] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57883 - 52957 "HINFO IN 3098802038795600457.9043978679892843741. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.046140957s
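
CoreDNS itself came up cleanly: the only query logged is its own startup HINFO self-check, which is expected to return NXDOMAIN. A quick end-to-end DNS probe from inside the cluster, sketched with the busybox image the test already pulled:

	kubectl --context embed-certs-752315 run dns-probe --rm -it --restart=Never \
	  --image=gcr.io/k8s-minikube/busybox:1.28.4-glibc -- \
	  nslookup kubernetes.default.svc.cluster.local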
	
	
	==> describe nodes <==
	Name:               embed-certs-752315
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-752315
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-752315
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-752315
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:30:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:30:35 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:30:35 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:30:35 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:30:35 +0000   Sun, 26 Oct 2025 08:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-752315
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cae690de-b1ed-4dcd-8194-03992c24069f
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-jktn8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-752315                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-m4lzl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-752315             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-embed-certs-752315    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-5bf98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-752315             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node embed-certs-752315 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node embed-certs-752315 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node embed-certs-752315 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node embed-certs-752315 event: Registered Node embed-certs-752315 in Controller
	  Normal  NodeReady                13s   kubelet          Node embed-certs-752315 status is now: NodeReady
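
Everything in the node description is healthy: Ready went True about 13s after kubelet start, the PodCIDR was assigned, and all control-plane pods are running. The two fields worth checking first in a failure like this can be pulled directly; a sketch using the same context:

	kubectl --context embed-certs-752315 get node embed-certs-752315 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'
	kubectl --context embed-certs-752315 get node embed-certs-752315 \
	  -o jsonpath='{.spec.podCIDR}{"\n"}'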
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
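
The repeating "martian source" entries are packets with a loopback source address arriving on eth0, logged because martian logging is enabled in the shared host kernel; their timestamps (07:50-07:51) predate this 08:30 test, so they are noise from other workloads on the agent rather than a failure here. The relevant sysctls can be checked like this (a sketch; run on the host or via minikube ssh):

	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter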
	
	
	==> etcd [61862e7dc1b6dc6af544acc218b960158e65800e8b621da328cdec06d1135a5d] <==
	{"level":"warn","ts":"2025-10-26T08:30:15.595757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.602070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53710","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.610489Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.617119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53746","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.623230Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.629438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.641910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.648439Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.655014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.661279Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.669148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.680047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.687045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.693544Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.699835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53960","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.706261Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.713128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.719662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.726296Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.733888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54058","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.746503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.750295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54086","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.756966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.764875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:30:15.828231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
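
Every etcd warning above is a "rejected connection ... EOF" from 127.0.0.1 during startup: most likely health probes that open a TCP connection and close it without completing a TLS handshake, which etcd logs but tolerates. An explicit health check can be run from the etcd pod itself; a sketch assuming minikube's usual certificate layout under /var/lib/minikube/certs/etcd (an assumption, not confirmed by this log):

	kubectl --context embed-certs-752315 -n kube-system exec etcd-embed-certs-752315 -- \
	  etcdctl --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key \
	  endpoint health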
	
	
	==> kernel <==
	 08:30:48 up  1:13,  0 user,  load average: 3.63, 3.12, 1.95
	Linux embed-certs-752315 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [83f873fae615ba318290d561c7279834807cd499bc56a2f10f98aea51136cc9b] <==
	I1026 08:30:24.926764       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:30:24.927076       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 08:30:24.927284       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:30:24.927307       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:30:24.927331       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:30:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:30:25.220581       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:30:25.221217       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:30:25.221272       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:30:25.221875       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:30:25.721391       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:30:25.721414       1 metrics.go:72] Registering metrics
	I1026 08:30:25.721473       1 controller.go:711] "Syncing nftables rules"
	I1026 08:30:35.191375       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:30:35.191448       1 main.go:301] handling current node
	I1026 08:30:45.190101       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:30:45.190135       1 main.go:301] handling current node
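
kindnet is doing its two jobs here: syncing nftables rules for network policies and reconciling routes for each node (only the local node in this single-node cluster); the "nri plugin exited" line is informational, since CRI-O has no NRI socket enabled. To watch the same output live (a sketch; app=kindnet is assumed to be the label minikube's manifest uses, verify before relying on it):

	kubectl --context embed-certs-752315 -n kube-system logs -l app=kindnet --tail=20 -f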
	
	
	==> kube-apiserver [ca8a762eec884347445aa43efc1602f906233800969d56a835b89e0124a6d5e9] <==
	E1026 08:30:16.391194       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1026 08:30:16.404722       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:30:16.406952       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:16.406970       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 08:30:16.411683       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:16.412049       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:30:16.595835       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:30:17.207114       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:30:17.210887       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:30:17.210904       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:30:17.668381       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:30:17.701585       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:30:17.813510       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:30:17.819709       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1026 08:30:17.820784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:30:17.825295       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:30:18.609171       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:30:18.775848       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:30:18.785590       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:30:18.792305       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:30:24.311469       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 08:30:24.413616       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:24.418454       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:30:24.710710       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1026 08:30:46.706323       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:54996: use of closed network connection
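
Apart from one "use of closed network connection" (a client hanging up mid-read, typically benign), the apiserver log is routine bootstrap: admission evaluators registered, ClusterIPs allocated, default objects created. Aggregate health can be confirmed directly; a sketch using the same context:

	kubectl --context embed-certs-752315 get --raw '/readyz?verbose' | tail -n 5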
	
	
	==> kube-controller-manager [3319b03dc46d83c87bdb16c77dfb474a14e1540a7a1a2e00c300d7ec693aba6d] <==
	I1026 08:30:23.570718       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-752315" podCIDRs=["10.244.0.0/24"]
	I1026 08:30:23.579772       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:30:23.606998       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1026 08:30:23.607000       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:30:23.607118       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:30:23.607167       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:30:23.607210       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 08:30:23.607421       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:30:23.607436       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:30:23.607658       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:30:23.607718       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:30:23.607795       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:30:23.607965       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:30:23.608031       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:30:23.608155       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:30:23.608163       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:30:23.608209       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:30:23.608221       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:30:23.609020       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:30:23.612135       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:30:23.613345       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:30:23.619409       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:30:23.626720       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:30:23.632948       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:30:38.549894       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1237b1106128b99caa5a2bafb83ec259cea24ebb5ad4825cf820445ea7c0992c] <==
	I1026 08:30:24.741284       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:30:24.814811       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:30:24.916096       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:30:24.916137       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 08:30:24.916240       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:30:24.940637       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:30:24.940694       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:30:24.946050       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:30:24.946592       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:30:24.946671       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:24.948307       1 config.go:200] "Starting service config controller"
	I1026 08:30:24.948334       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:30:24.948362       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:30:24.948368       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:30:24.948396       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:30:24.948411       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:30:24.948460       1 config.go:309] "Starting node config controller"
	I1026 08:30:24.948470       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:30:25.049133       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:30:25.049163       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:30:25.049135       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:30:25.049203       1 shared_informer.go:356] "Caches are synced" controller="service config"
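
kube-proxy started in iptables mode and synced all four of its config informers. The rules it programmed can be spot-checked from the node; a sketch (the grep runs locally on the ssh output):

	minikube -p embed-certs-752315 ssh -- sudo iptables-save | grep -m 5 'KUBE-SVC'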
	
	
	==> kube-scheduler [b9e9e130230f397d4c023cfbc1ec3f119b6ea2655cc18dd2fc814ebd54df9ff1] <==
	E1026 08:30:16.257744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:30:16.257869       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:30:16.258041       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:30:16.258159       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:30:16.258323       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:30:16.258318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:30:16.258436       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:30:16.258451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:30:16.258521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:30:16.258526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:30:16.258769       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:30:16.258821       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:30:16.258819       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:30:16.258936       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:30:16.258954       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:30:16.258825       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:30:17.067991       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:30:17.125413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:30:17.199012       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:30:17.292780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:30:17.292780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:30:17.301167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:30:17.383836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:30:17.422994       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 08:30:17.754183       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
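
The scheduler's burst of "Failed to watch ... forbidden" errors is a normal startup race: its informers begin listing resources before kubeadm finishes creating the system:kube-scheduler RBAC bindings, and the errors stop once the roles exist (note the final "Caches are synced" line at 08:30:17). After bootstrap the grants can be verified via impersonation; a sketch:

	kubectl --context embed-certs-752315 auth can-i list pods --as=system:kube-scheduler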
	
	
	==> kubelet <==
	Oct 26 08:30:19 embed-certs-752315 kubelet[1316]: I1026 08:30:19.652120    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-752315" podStartSLOduration=1.652099211 podStartE2EDuration="1.652099211s" podCreationTimestamp="2025-10-26 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:19.641160071 +0000 UTC m=+1.125055423" watchObservedRunningTime="2025-10-26 08:30:19.652099211 +0000 UTC m=+1.135994569"
	Oct 26 08:30:19 embed-certs-752315 kubelet[1316]: I1026 08:30:19.652294    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-752315" podStartSLOduration=1.652281868 podStartE2EDuration="1.652281868s" podCreationTimestamp="2025-10-26 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:19.652194374 +0000 UTC m=+1.136089723" watchObservedRunningTime="2025-10-26 08:30:19.652281868 +0000 UTC m=+1.136177211"
	Oct 26 08:30:19 embed-certs-752315 kubelet[1316]: I1026 08:30:19.661279    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-752315" podStartSLOduration=1.661241554 podStartE2EDuration="1.661241554s" podCreationTimestamp="2025-10-26 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:19.661078731 +0000 UTC m=+1.144974082" watchObservedRunningTime="2025-10-26 08:30:19.661241554 +0000 UTC m=+1.145136911"
	Oct 26 08:30:19 embed-certs-752315 kubelet[1316]: I1026 08:30:19.673324    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-752315" podStartSLOduration=1.673301386 podStartE2EDuration="1.673301386s" podCreationTimestamp="2025-10-26 08:30:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:19.672467809 +0000 UTC m=+1.156363166" watchObservedRunningTime="2025-10-26 08:30:19.673301386 +0000 UTC m=+1.157196745"
	Oct 26 08:30:23 embed-certs-752315 kubelet[1316]: I1026 08:30:23.624279    1316 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 08:30:23 embed-certs-752315 kubelet[1316]: I1026 08:30:23.625018    1316 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420319    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/2bad6af2-87f0-4874-957b-80da1acf3644-cni-cfg\") pod \"kindnet-m4lzl\" (UID: \"2bad6af2-87f0-4874-957b-80da1acf3644\") " pod="kube-system/kindnet-m4lzl"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420531    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8d092c78-0205-4b69-84bd-bb2b1ec33f17-kube-proxy\") pod \"kube-proxy-5bf98\" (UID: \"8d092c78-0205-4b69-84bd-bb2b1ec33f17\") " pod="kube-system/kube-proxy-5bf98"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420577    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6cnv\" (UniqueName: \"kubernetes.io/projected/8d092c78-0205-4b69-84bd-bb2b1ec33f17-kube-api-access-w6cnv\") pod \"kube-proxy-5bf98\" (UID: \"8d092c78-0205-4b69-84bd-bb2b1ec33f17\") " pod="kube-system/kube-proxy-5bf98"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420609    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2bad6af2-87f0-4874-957b-80da1acf3644-lib-modules\") pod \"kindnet-m4lzl\" (UID: \"2bad6af2-87f0-4874-957b-80da1acf3644\") " pod="kube-system/kindnet-m4lzl"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420632    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8d092c78-0205-4b69-84bd-bb2b1ec33f17-lib-modules\") pod \"kube-proxy-5bf98\" (UID: \"8d092c78-0205-4b69-84bd-bb2b1ec33f17\") " pod="kube-system/kube-proxy-5bf98"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420707    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2bad6af2-87f0-4874-957b-80da1acf3644-xtables-lock\") pod \"kindnet-m4lzl\" (UID: \"2bad6af2-87f0-4874-957b-80da1acf3644\") " pod="kube-system/kindnet-m4lzl"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420747    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8d092c78-0205-4b69-84bd-bb2b1ec33f17-xtables-lock\") pod \"kube-proxy-5bf98\" (UID: \"8d092c78-0205-4b69-84bd-bb2b1ec33f17\") " pod="kube-system/kube-proxy-5bf98"
	Oct 26 08:30:24 embed-certs-752315 kubelet[1316]: I1026 08:30:24.420783    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf46b\" (UniqueName: \"kubernetes.io/projected/2bad6af2-87f0-4874-957b-80da1acf3644-kube-api-access-gf46b\") pod \"kindnet-m4lzl\" (UID: \"2bad6af2-87f0-4874-957b-80da1acf3644\") " pod="kube-system/kindnet-m4lzl"
	Oct 26 08:30:25 embed-certs-752315 kubelet[1316]: I1026 08:30:25.639800    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5bf98" podStartSLOduration=1.639781902 podStartE2EDuration="1.639781902s" podCreationTimestamp="2025-10-26 08:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:25.639578237 +0000 UTC m=+7.123473608" watchObservedRunningTime="2025-10-26 08:30:25.639781902 +0000 UTC m=+7.123677260"
	Oct 26 08:30:25 embed-certs-752315 kubelet[1316]: I1026 08:30:25.649765    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-m4lzl" podStartSLOduration=1.649741898 podStartE2EDuration="1.649741898s" podCreationTimestamp="2025-10-26 08:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:25.649670423 +0000 UTC m=+7.133565782" watchObservedRunningTime="2025-10-26 08:30:25.649741898 +0000 UTC m=+7.133637257"
	Oct 26 08:30:35 embed-certs-752315 kubelet[1316]: I1026 08:30:35.699719    1316 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 08:30:35 embed-certs-752315 kubelet[1316]: I1026 08:30:35.809509    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6dl\" (UniqueName: \"kubernetes.io/projected/0c8393f3-2b62-4bc8-b3cf-a43059d8cdee-kube-api-access-9d6dl\") pod \"storage-provisioner\" (UID: \"0c8393f3-2b62-4bc8-b3cf-a43059d8cdee\") " pod="kube-system/storage-provisioner"
	Oct 26 08:30:35 embed-certs-752315 kubelet[1316]: I1026 08:30:35.809557    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0c8393f3-2b62-4bc8-b3cf-a43059d8cdee-tmp\") pod \"storage-provisioner\" (UID: \"0c8393f3-2b62-4bc8-b3cf-a43059d8cdee\") " pod="kube-system/storage-provisioner"
	Oct 26 08:30:35 embed-certs-752315 kubelet[1316]: I1026 08:30:35.809583    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzhp8\" (UniqueName: \"kubernetes.io/projected/9a6b6a27-7914-4afa-9aee-3ef807310513-kube-api-access-wzhp8\") pod \"coredns-66bc5c9577-jktn8\" (UID: \"9a6b6a27-7914-4afa-9aee-3ef807310513\") " pod="kube-system/coredns-66bc5c9577-jktn8"
	Oct 26 08:30:35 embed-certs-752315 kubelet[1316]: I1026 08:30:35.809606    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9a6b6a27-7914-4afa-9aee-3ef807310513-config-volume\") pod \"coredns-66bc5c9577-jktn8\" (UID: \"9a6b6a27-7914-4afa-9aee-3ef807310513\") " pod="kube-system/coredns-66bc5c9577-jktn8"
	Oct 26 08:30:36 embed-certs-752315 kubelet[1316]: I1026 08:30:36.667756    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.667738278 podStartE2EDuration="11.667738278s" podCreationTimestamp="2025-10-26 08:30:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:36.667586804 +0000 UTC m=+18.151482162" watchObservedRunningTime="2025-10-26 08:30:36.667738278 +0000 UTC m=+18.151633636"
	Oct 26 08:30:38 embed-certs-752315 kubelet[1316]: I1026 08:30:38.591044    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jktn8" podStartSLOduration=14.591020116 podStartE2EDuration="14.591020116s" podCreationTimestamp="2025-10-26 08:30:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:30:36.678215319 +0000 UTC m=+18.162110672" watchObservedRunningTime="2025-10-26 08:30:38.591020116 +0000 UTC m=+20.074915475"
	Oct 26 08:30:38 embed-certs-752315 kubelet[1316]: I1026 08:30:38.625617    1316 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nz69\" (UniqueName: \"kubernetes.io/projected/5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4-kube-api-access-4nz69\") pod \"busybox\" (UID: \"5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4\") " pod="default/busybox"
	Oct 26 08:30:40 embed-certs-752315 kubelet[1316]: I1026 08:30:40.681844    1316 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.414763142 podStartE2EDuration="2.681823946s" podCreationTimestamp="2025-10-26 08:30:38 +0000 UTC" firstStartedPulling="2025-10-26 08:30:38.915656203 +0000 UTC m=+20.399551543" lastFinishedPulling="2025-10-26 08:30:40.182717006 +0000 UTC m=+21.666612347" observedRunningTime="2025-10-26 08:30:40.681576535 +0000 UTC m=+22.165471893" watchObservedRunningTime="2025-10-26 08:30:40.681823946 +0000 UTC m=+22.165719304"
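
The kubelet lines are its pod-startup-latency tracker; firstStartedPulling/lastFinishedPulling of 0001-01-01 simply mean the image was already present, while busybox shows a real pull of about 1.3s. The same timeline is visible through events; a sketch:

	kubectl --context embed-certs-752315 get events -A --sort-by=.lastTimestamp | tail -n 15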
	
	
	==> storage-provisioner [75a7b4f984851276ae6a8145e2ffba8eb6fa27846c573a6ad1ba6666d8e62a1d] <==
	I1026 08:30:36.085451       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:30:36.094819       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:30:36.094868       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:30:36.097157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:36.104042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:30:36.104203       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:30:36.104403       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_e083e33d-0d59-47e1-9b86-196389f0a644!
	I1026 08:30:36.104414       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf86141f-07c1-4e09-9431-3b0349d6fa2c", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-752315_e083e33d-0d59-47e1-9b86-196389f0a644 became leader
	W1026 08:30:36.106822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:36.110797       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:30:36.205462       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_e083e33d-0d59-47e1-9b86-196389f0a644!
	W1026 08:30:38.114312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:38.118301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:40.121881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:40.126438       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:42.130042       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:42.133645       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:44.136739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:44.142074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:46.144927       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:46.149020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:48.152121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:30:48.157377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
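
The storage-provisioner warnings repeat every ~2s because its leader election still renews an Endpoints-based lock, which Kubernetes deprecates in favor of coordination.k8s.io Leases; functionally it acquired the lease and started fine. The lock object itself can be inspected (a sketch):

	kubectl --context embed-certs-752315 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml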
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-752315 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.41s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-810379 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-810379 --alsologtostderr -v=1: exit status 80 (2.291443844s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-810379 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:31:17.233138  261420 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:17.233388  261420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:17.233397  261420 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:17.233401  261420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:17.233598  261420 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:17.233846  261420 out.go:368] Setting JSON to false
	I1026 08:31:17.233890  261420 mustload.go:65] Loading cluster: old-k8s-version-810379
	I1026 08:31:17.234225  261420 config.go:182] Loaded profile config "old-k8s-version-810379": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1026 08:31:17.234621  261420 cli_runner.go:164] Run: docker container inspect old-k8s-version-810379 --format={{.State.Status}}
	I1026 08:31:17.253602  261420 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:31:17.253854  261420 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:17.321224  261420 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:31:17.309849886 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:17.322387  261420 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-810379 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:31:17.324794  261420 out.go:179] * Pausing node old-k8s-version-810379 ... 
	I1026 08:31:17.326290  261420 host.go:66] Checking if "old-k8s-version-810379" exists ...
	I1026 08:31:17.326583  261420 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:17.326647  261420 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-810379
	I1026 08:31:17.345561  261420 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33068 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/old-k8s-version-810379/id_rsa Username:docker}
	I1026 08:31:17.447403  261420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:17.461395  261420 pause.go:52] kubelet running: true
	I1026 08:31:17.461446  261420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:17.639070  261420 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:17.639159  261420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:17.717159  261420 cri.go:89] found id: "a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c"
	I1026 08:31:17.717191  261420 cri.go:89] found id: "97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a"
	I1026 08:31:17.717196  261420 cri.go:89] found id: "31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098"
	I1026 08:31:17.717202  261420 cri.go:89] found id: "f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad"
	I1026 08:31:17.717206  261420 cri.go:89] found id: "ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	I1026 08:31:17.717211  261420 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:31:17.717216  261420 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:31:17.717220  261420 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:31:17.717224  261420 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:31:17.717231  261420 cri.go:89] found id: "fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	I1026 08:31:17.717235  261420 cri.go:89] found id: "8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5"
	I1026 08:31:17.717238  261420 cri.go:89] found id: ""
	I1026 08:31:17.717306  261420 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:17.730984  261420 retry.go:31] will retry after 291.418026ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:17Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:18.023537  261420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:18.037181  261420 pause.go:52] kubelet running: false
	I1026 08:31:18.037232  261420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:18.187651  261420 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:18.187718  261420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:18.255565  261420 cri.go:89] found id: "a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c"
	I1026 08:31:18.255586  261420 cri.go:89] found id: "97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a"
	I1026 08:31:18.255591  261420 cri.go:89] found id: "31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098"
	I1026 08:31:18.255595  261420 cri.go:89] found id: "f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad"
	I1026 08:31:18.255599  261420 cri.go:89] found id: "ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	I1026 08:31:18.255604  261420 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:31:18.255607  261420 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:31:18.255610  261420 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:31:18.255614  261420 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:31:18.255627  261420 cri.go:89] found id: "fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	I1026 08:31:18.255632  261420 cri.go:89] found id: "8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5"
	I1026 08:31:18.255635  261420 cri.go:89] found id: ""
	I1026 08:31:18.255682  261420 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:18.267810  261420 retry.go:31] will retry after 379.975979ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:18Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:18.648454  261420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:18.661711  261420 pause.go:52] kubelet running: false
	I1026 08:31:18.661768  261420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:18.808226  261420 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:18.808324  261420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:18.878049  261420 cri.go:89] found id: "a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c"
	I1026 08:31:18.878074  261420 cri.go:89] found id: "97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a"
	I1026 08:31:18.878080  261420 cri.go:89] found id: "31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098"
	I1026 08:31:18.878085  261420 cri.go:89] found id: "f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad"
	I1026 08:31:18.878089  261420 cri.go:89] found id: "ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	I1026 08:31:18.878093  261420 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:31:18.878097  261420 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:31:18.878102  261420 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:31:18.878106  261420 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:31:18.878132  261420 cri.go:89] found id: "fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	I1026 08:31:18.878140  261420 cri.go:89] found id: "8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5"
	I1026 08:31:18.878145  261420 cri.go:89] found id: ""
	I1026 08:31:18.878198  261420 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:18.891483  261420 retry.go:31] will retry after 305.044564ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:18Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:19.196970  261420 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:19.216669  261420 pause.go:52] kubelet running: false
	I1026 08:31:19.216826  261420 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:19.376639  261420 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:19.376711  261420 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:19.444604  261420 cri.go:89] found id: "a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c"
	I1026 08:31:19.444626  261420 cri.go:89] found id: "97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a"
	I1026 08:31:19.444630  261420 cri.go:89] found id: "31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098"
	I1026 08:31:19.444633  261420 cri.go:89] found id: "f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad"
	I1026 08:31:19.444636  261420 cri.go:89] found id: "ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	I1026 08:31:19.444639  261420 cri.go:89] found id: "05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea"
	I1026 08:31:19.444643  261420 cri.go:89] found id: "91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded"
	I1026 08:31:19.444647  261420 cri.go:89] found id: "8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503"
	I1026 08:31:19.444651  261420 cri.go:89] found id: "b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec"
	I1026 08:31:19.444659  261420 cri.go:89] found id: "fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	I1026 08:31:19.444663  261420 cri.go:89] found id: "8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5"
	I1026 08:31:19.444667  261420 cri.go:89] found id: ""
	I1026 08:31:19.444721  261420 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:19.458429  261420 out.go:203] 
	W1026 08:31:19.459571  261420 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:31:19.459599  261420 out.go:285] * 
	* 
	W1026 08:31:19.463523  261420 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:31:19.464571  261420 out.go:203] 
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-810379 --alsologtostderr -v=1 failed: exit status 80
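Note that every pause retry above fails at the same step: `sudo runc list -f json` exits 1 with "open /run/runc: no such file or directory", so the pause path never obtains a container list to act on. A minimal sketch for reproducing the failing step by hand, assuming the old-k8s-version-810379 profile from this run is still up; it reuses only commands already shown in the logs:

	# Re-run the exact command minikube's pause path retries, over the node's
	# SSH session; on this node it exits 1 with the same "open /run/runc" error.
	out/minikube-linux-amd64 -p old-k8s-version-810379 ssh "sudo runc list -f json"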
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-810379
helpers_test.go:243: (dbg) docker inspect old-k8s-version-810379:

-- stdout --
	[
	    {
	        "Id": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	        "Created": "2025-10-26T08:29:09.042514733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:30:23.583603168Z",
	            "FinishedAt": "2025-10-26T08:30:22.660960443Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hosts",
	        "LogPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c-json.log",
	        "Name": "/old-k8s-version-810379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-810379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-810379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	                "LowerDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-810379",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-810379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-810379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "953375e7196951ec0716c8fa4b523e4e7b4c7e784936f550cd5e828bf3cc9937",
	            "SandboxKey": "/var/run/docker/netns/953375e71969",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-810379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:3c:9a:64:eb:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19bd044ce129aeaf476dbf54add850f4fcc444c6e57c15a6d61eea854dbd9172",
	                    "EndpointID": "26dd318f7999c3e7ded5e6872f4d2d9e3838a16f11c82294b6ec550e64ebcc7b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-810379",
	                        "ccdf5b36aedf"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
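The Ports block in the inspect output above is what the pause run resolved in its cli_runner step (22/tcp mapped to host port 33068). The same Go template can be replayed by hand; a sketch, assuming the container still exists:

	# Resolve the host port bound to the node's SSH port (22/tcp); for the
	# container inspected above this prints 33068.
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-810379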
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379: exit status 2 (337.88594ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
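As the harness's "(may be ok)" note suggests, `minikube status` exits non-zero when a checked component is not in its expected state; the failed pause attempt above had already run `systemctl disable --now kubelet`, so the host container reports Running while other checks fail. A sketch for scripting around the non-zero exit; the `|| true` is the only addition, the command itself is copied from the step above:

	# Capture the host state without letting the non-zero status abort a script.
	host=$(out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379) || true
	echo "host state: ${host}"   # prints "host state: Running" for this node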
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25: (1.288202241s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cert-expiration-535689                                                                                                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ stop    │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:31:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:31:05.805242  258469 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:05.805416  258469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:05.805428  258469 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:05.805433  258469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:05.805734  258469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:05.806286  258469 out.go:368] Setting JSON to false
	I1026 08:31:05.807626  258469 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4417,"bootTime":1761463049,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:31:05.807741  258469 start.go:141] virtualization: kvm guest
	I1026 08:31:05.809911  258469 out.go:179] * [embed-certs-752315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:31:05.811961  258469 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:31:05.812146  258469 notify.go:220] Checking for updates...
	I1026 08:31:05.815169  258469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:31:05.817535  258469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:05.821945  258469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:31:05.823526  258469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:31:05.825137  258469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:31:05.827559  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:05.828188  258469 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:31:05.855464  258469 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:31:05.855571  258469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:05.914562  258469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:05.902620687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:05.914668  258469 docker.go:318] overlay module found
	I1026 08:31:05.916550  258469 out.go:179] * Using the docker driver based on existing profile
	I1026 08:31:05.917797  258469 start.go:305] selected driver: docker
	I1026 08:31:05.917813  258469 start.go:925] validating driver "docker" against &{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:05.917890  258469 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:31:05.918484  258469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:05.976198  258469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:05.966373611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:05.976479  258469 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:31:05.976509  258469 cni.go:84] Creating CNI manager for ""
	I1026 08:31:05.976560  258469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:05.976592  258469 start.go:349] cluster config:
	{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:05.979320  258469 out.go:179] * Starting "embed-certs-752315" primary control-plane node in "embed-certs-752315" cluster
	I1026 08:31:05.980516  258469 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:31:05.981982  258469 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:31:05.983375  258469 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:05.983408  258469 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:31:05.983429  258469 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:31:05.983444  258469 cache.go:58] Caching tarball of preloaded images
	I1026 08:31:05.983554  258469 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:31:05.983569  258469 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:31:05.983685  258469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json ...
	I1026 08:31:06.007074  258469 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:31:06.007099  258469 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:31:06.007114  258469 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:31:06.007148  258469 start.go:360] acquireMachinesLock for embed-certs-752315: {Name:mke5e92fe2bbc27b2e8ece3d6f167d2db37c8fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:31:06.007214  258469 start.go:364] duration metric: took 43.528µs to acquireMachinesLock for "embed-certs-752315"
	I1026 08:31:06.007237  258469 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:31:06.007244  258469 fix.go:54] fixHost starting: 
	I1026 08:31:06.007638  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:06.027721  258469 fix.go:112] recreateIfNeeded on embed-certs-752315: state=Stopped err=<nil>
	W1026 08:31:06.027754  258469 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:31:03.439904  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:03.440395  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:03.440443  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:03.440495  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:03.469040  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:03.469059  204716 cri.go:89] found id: ""
	I1026 08:31:03.469067  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:03.469114  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.474135  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:03.474192  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:03.508636  204716 cri.go:89] found id: ""
	I1026 08:31:03.508662  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.508670  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:03.508676  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:03.508725  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:03.540107  204716 cri.go:89] found id: ""
	I1026 08:31:03.540132  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.540142  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:03.540149  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:03.540210  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:03.570207  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:03.570232  204716 cri.go:89] found id: ""
	I1026 08:31:03.570242  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:03.570327  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.574669  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:03.574733  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:03.600821  204716 cri.go:89] found id: ""
	I1026 08:31:03.600849  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.600859  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:03.600865  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:03.600925  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:03.630046  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:03.630078  204716 cri.go:89] found id: ""
	I1026 08:31:03.630087  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:03.630138  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.634284  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:03.634356  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:03.661460  204716 cri.go:89] found id: ""
	I1026 08:31:03.661486  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.661497  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:03.661504  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:03.661564  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:03.687920  204716 cri.go:89] found id: ""
	I1026 08:31:03.687948  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.687959  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:03.687969  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:03.687985  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:03.719654  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:03.719678  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:03.822895  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:03.822927  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:03.837751  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:03.837779  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:03.895660  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:03.895682  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:03.895699  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:03.932786  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:03.932821  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:03.997277  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:03.997308  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:04.024432  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:04.024461  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
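The block above is one full pass of minikube's diagnostic loop: while the apiserver healthz probe is refused, it enumerates every expected control-plane container with crictl and tails whatever logs exist. Collapsed into a hand-run sketch (commands and paths taken verbatim from the log; the loop framing is illustrative, not minikube's code):

    # one diagnostic pass, as run on the node
    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
                kube-controller-manager kindnet storage-provisioner; do
      sudo crictl ps -a --quiet --name="$name"   # list matching containers, if any
    done
    sudo journalctl -u kubelet -n 400            # kubelet logs
    sudo journalctl -u crio -n 400               # CRI-O logs
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig  # refused while the apiserver is down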
	I1026 08:31:06.577738  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:06.578682  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:06.578740  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:06.578799  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:06.623062  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:06.623095  204716 cri.go:89] found id: ""
	I1026 08:31:06.623105  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:06.623173  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.629127  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:06.629202  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:06.664670  204716 cri.go:89] found id: ""
	I1026 08:31:06.664703  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.664714  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:06.664721  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:06.664775  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:06.705708  204716 cri.go:89] found id: ""
	I1026 08:31:06.705736  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.705747  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:06.705755  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:06.705821  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:06.745521  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:06.745617  204716 cri.go:89] found id: ""
	I1026 08:31:06.745633  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:06.745685  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.751273  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:06.751342  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:02.941177  255419 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:31:02.945623  255419 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:31:02.946691  255419 api_server.go:141] control plane version: v1.34.1
	I1026 08:31:02.946712  255419 api_server.go:131] duration metric: took 1.006296161s to wait for apiserver health ...
	I1026 08:31:02.946720  255419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:31:02.950182  255419 system_pods.go:59] 8 kube-system pods found
	I1026 08:31:02.950231  255419 system_pods.go:61] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:02.950260  255419 system_pods.go:61] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:02.950273  255419 system_pods.go:61] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:31:02.950283  255419 system_pods.go:61] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:02.950292  255419 system_pods.go:61] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:02.950303  255419 system_pods.go:61] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:31:02.950319  255419 system_pods.go:61] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:02.950328  255419 system_pods.go:61] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:31:02.950335  255419 system_pods.go:74] duration metric: took 3.609576ms to wait for pod list to return data ...
	I1026 08:31:02.950347  255419 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:31:02.952826  255419 default_sa.go:45] found service account: "default"
	I1026 08:31:02.952846  255419 default_sa.go:55] duration metric: took 2.488921ms for default service account to be created ...
	I1026 08:31:02.952856  255419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:31:02.955696  255419 system_pods.go:86] 8 kube-system pods found
	I1026 08:31:02.955728  255419 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:02.955742  255419 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:02.955753  255419 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:31:02.955762  255419 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:02.955770  255419 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:02.955777  255419 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:31:02.955785  255419 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:02.955794  255419 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:31:02.955806  255419 system_pods.go:126] duration metric: took 2.943417ms to wait for k8s-apps to be running ...
	I1026 08:31:02.955818  255419 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:31:02.955867  255419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:02.968906  255419 system_svc.go:56] duration metric: took 13.078941ms WaitForService to wait for kubelet
	I1026 08:31:02.968938  255419 kubeadm.go:586] duration metric: took 3.499490317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:31:02.968958  255419 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:31:02.972195  255419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:31:02.972238  255419 node_conditions.go:123] node cpu capacity is 8
	I1026 08:31:02.972269  255419 node_conditions.go:105] duration metric: took 3.305073ms to run NodePressure ...
	I1026 08:31:02.972284  255419 start.go:241] waiting for startup goroutines ...
	I1026 08:31:02.972295  255419 start.go:246] waiting for cluster config update ...
	I1026 08:31:02.972308  255419 start.go:255] writing updated cluster config ...
	I1026 08:31:02.972635  255419 ssh_runner.go:195] Run: rm -f paused
	I1026 08:31:02.977319  255419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:31:02.980807  255419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:31:04.985926  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:06.987540  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
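The pod_ready warnings above belong to the extra 4m0s wait declared at 08:31:02.977: each labelled control-plane pod is polled until its Ready condition turns True or the pod is gone. Roughly the same check by hand (illustrative; minikube does this through client-go, not kubectl):

    kubectl -n kube-system get pod coredns-66bc5c9577-p5nmq \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'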
	I1026 08:31:06.029201  258469 out.go:252] * Restarting existing docker container for "embed-certs-752315" ...
	I1026 08:31:06.029300  258469 cli_runner.go:164] Run: docker start embed-certs-752315
	I1026 08:31:06.346576  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:06.371753  258469 kic.go:430] container "embed-certs-752315" state is running.
	I1026 08:31:06.372675  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:06.397464  258469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json ...
	I1026 08:31:06.397824  258469 machine.go:93] provisionDockerMachine start ...
	I1026 08:31:06.397901  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:06.422854  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:06.423234  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:06.423263  258469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:31:06.424075  258469 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47670->127.0.0.1:33078: read: connection reset by peer
	I1026 08:31:09.580976  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-752315
	
	I1026 08:31:09.581011  258469 ubuntu.go:182] provisioning hostname "embed-certs-752315"
	I1026 08:31:09.581072  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:09.603783  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:09.604133  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:09.604155  258469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-752315 && echo "embed-certs-752315" | sudo tee /etc/hostname
	I1026 08:31:09.775966  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-752315
	
	I1026 08:31:09.776051  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:09.800007  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:09.800332  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:09.800362  258469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-752315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-752315/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-752315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:31:09.959841  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
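The inlined script above follows the Debian convention of mapping the machine's own hostname to 127.0.1.1: an existing 127.0.1.1 line is rewritten in place, otherwise one is appended, and the outer grep -xq guard keeps the whole thing idempotent across restarts. The result can be verified with:

    grep -n '127.0.1.1' /etc/hosts   # expect: 127.0.1.1 embed-certs-752315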
	I1026 08:31:09.959872  258469 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:31:09.959893  258469 ubuntu.go:190] setting up certificates
	I1026 08:31:09.959903  258469 provision.go:84] configureAuth start
	I1026 08:31:09.959976  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:09.982498  258469 provision.go:143] copyHostCerts
	I1026 08:31:09.982605  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:31:09.982628  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:31:09.982716  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:31:09.982862  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:31:09.982877  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:31:09.982925  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:31:09.983272  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:31:09.983287  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:31:09.983336  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:31:09.983436  258469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.embed-certs-752315 san=[127.0.0.1 192.168.103.2 embed-certs-752315 localhost minikube]
	I1026 08:31:10.490412  258469 provision.go:177] copyRemoteCerts
	I1026 08:31:10.490469  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:31:10.490512  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:10.515663  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:10.631192  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:31:10.657511  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 08:31:10.682582  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:31:10.708228  258469 provision.go:87] duration metric: took 748.30937ms to configureAuth
	I1026 08:31:10.708282  258469 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:31:10.708512  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:10.708661  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:10.734238  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:10.734552  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:10.734583  258469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:31:06.785668  204716 cri.go:89] found id: ""
	I1026 08:31:06.785690  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.785698  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:06.785704  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:06.785753  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:06.816653  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:06.816674  204716 cri.go:89] found id: ""
	I1026 08:31:06.816682  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:06.816737  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.820934  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:06.821005  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:06.849023  204716 cri.go:89] found id: ""
	I1026 08:31:06.849048  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.849056  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:06.849062  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:06.849470  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:06.883530  204716 cri.go:89] found id: ""
	I1026 08:31:06.883557  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.883577  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:06.883587  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:06.883631  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:07.019857  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:07.019890  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:07.038714  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:07.038746  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:07.114322  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:07.114350  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:07.114366  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:07.155462  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:07.155502  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:07.213712  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:07.213746  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:07.242881  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:07.242904  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:07.299545  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:07.299578  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:09.838991  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:09.839443  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:09.839502  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:09.839556  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:09.876682  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:09.876707  204716 cri.go:89] found id: ""
	I1026 08:31:09.876717  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:09.876775  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:09.881816  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:09.881891  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:09.917109  204716 cri.go:89] found id: ""
	I1026 08:31:09.917135  204716 logs.go:282] 0 containers: []
	W1026 08:31:09.917147  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:09.917155  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:09.917218  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:09.951211  204716 cri.go:89] found id: ""
	I1026 08:31:09.951239  204716 logs.go:282] 0 containers: []
	W1026 08:31:09.951316  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:09.951329  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:09.951404  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:09.990196  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:09.990219  204716 cri.go:89] found id: ""
	I1026 08:31:09.990229  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:09.990321  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:09.995707  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:09.995769  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:10.033394  204716 cri.go:89] found id: ""
	I1026 08:31:10.033418  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.033427  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:10.033434  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:10.033490  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:10.070961  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:10.070999  204716 cri.go:89] found id: ""
	I1026 08:31:10.071008  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:10.071073  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:10.075866  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:10.075937  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:10.112025  204716 cri.go:89] found id: ""
	I1026 08:31:10.112052  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.112062  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:10.112069  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:10.112121  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:10.149215  204716 cri.go:89] found id: ""
	I1026 08:31:10.149241  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.149274  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:10.149286  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:10.149306  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:10.183486  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:10.183521  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:10.258122  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:10.258163  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:10.299394  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:10.299426  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:10.445199  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:10.445230  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:10.466321  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:10.466349  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:10.548993  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:10.549018  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:10.549033  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:10.599059  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:10.599100  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	W1026 08:31:09.487682  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:11.986666  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	I1026 08:31:11.689627  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:31:11.689657  258469 machine.go:96] duration metric: took 5.291813216s to provisionDockerMachine
	I1026 08:31:11.689671  258469 start.go:293] postStartSetup for "embed-certs-752315" (driver="docker")
	I1026 08:31:11.689684  258469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:31:11.689741  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:31:11.689810  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.711114  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:11.814836  258469 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:31:11.818782  258469 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:31:11.818809  258469 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:31:11.818822  258469 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:31:11.818881  258469 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:31:11.818984  258469 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:31:11.819126  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:31:11.827451  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:11.846886  258469 start.go:296] duration metric: took 157.199732ms for postStartSetup
	I1026 08:31:11.846961  258469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:31:11.847035  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.865929  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:11.969619  258469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
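The two df probes above and at 08:31:11.846 are minikube's disk check on /var: the first extracts the use percentage (column 5 of df -h), the second the free space in whole gigabytes (column 4 of df -BG). Run by hand (sample outputs hypothetical):

    df -h /var | awk 'NR==2{print $5}'   # e.g. 23%
    df -BG /var | awk 'NR==2{print $4}'  # e.g. 230G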
	I1026 08:31:11.974564  258469 fix.go:56] duration metric: took 5.967312408s for fixHost
	I1026 08:31:11.974600  258469 start.go:83] releasing machines lock for "embed-certs-752315", held for 5.967365908s
	I1026 08:31:11.974667  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:11.993896  258469 ssh_runner.go:195] Run: cat /version.json
	I1026 08:31:11.993960  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.993962  258469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:31:11.994009  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:12.013691  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:12.014560  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:12.165392  258469 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:12.172065  258469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:31:12.208051  258469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:31:12.213071  258469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:31:12.213135  258469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:31:12.221069  258469 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
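The find invocation above is how minikube clears competing CNI configs before relying on kindnet (recommended at 08:31:13.186 below): any bridge or podman config file in /etc/cni/net.d is renamed with a .mk_disabled suffix. Here nothing matched. A read-only form of the same filter, useful for inspection:

    # list the CNI configs minikube would disable, without renaming anything
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled'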
	I1026 08:31:12.221096  258469 start.go:495] detecting cgroup driver to use...
	I1026 08:31:12.221128  258469 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:31:12.221169  258469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:31:12.235270  258469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:31:12.248189  258469 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:31:12.248237  258469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:31:12.262552  258469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:31:12.275439  258469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:31:12.360531  258469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:31:12.445898  258469 docker.go:234] disabling docker service ...
	I1026 08:31:12.445949  258469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:31:12.460131  258469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:31:12.472733  258469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:31:12.558293  258469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:31:12.640157  258469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:31:12.652839  258469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:31:12.667183  258469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:31:12.667231  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.676543  258469 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:31:12.676614  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.685642  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.694564  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.704130  258469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:31:12.714059  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.725005  258469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.733854  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.742811  258469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:31:12.750153  258469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:31:12.758020  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:12.840024  258469 ssh_runner.go:195] Run: sudo systemctl restart crio
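Taken together, the sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, switch cgroup_manager to "systemd" (matching the host driver detected at 08:31:12.221), force conmon_cgroup to "pod", and add net.ipv4.ip_unprivileged_port_start=0 under default_sysctls, after which IP forwarding is enabled and CRI-O is restarted. The touched keys can be reviewed in one pass:

    sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf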
	I1026 08:31:12.954793  258469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:31:12.954860  258469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:31:12.959240  258469 start.go:563] Will wait 60s for crictl version
	I1026 08:31:12.959344  258469 ssh_runner.go:195] Run: which crictl
	I1026 08:31:12.963040  258469 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:31:12.987047  258469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:31:12.987146  258469 ssh_runner.go:195] Run: crio --version
	I1026 08:31:13.014910  258469 ssh_runner.go:195] Run: crio --version
	I1026 08:31:13.044788  258469 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:31:13.045993  258469 cli_runner.go:164] Run: docker network inspect embed-certs-752315 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:31:13.063539  258469 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 08:31:13.067988  258469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
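The one-liner above is minikube's /etc/hosts update pattern, reused below for control-plane.minikube.internal: strip any stale line for the name, append the fresh mapping, stage the result in a temp file, then install it with a single sudo cp. The same pattern, unrolled and annotated:

    { grep -v $'\thost.minikube.internal$' /etc/hosts    # drop any old entry
      printf '192.168.103.1\thost.minikube.internal\n'   # add the current mapping
    } > /tmp/h.$$                                        # stage unprivileged
    sudo cp /tmp/h.$$ /etc/hosts                         # install in one privileged step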
	I1026 08:31:13.079916  258469 kubeadm.go:883] updating cluster {Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:31:13.080100  258469 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:13.080169  258469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:13.113332  258469 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:13.113356  258469 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:31:13.113403  258469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:13.139663  258469 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:13.139687  258469 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:31:13.139696  258469 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 08:31:13.139810  258469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-752315 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:31:13.139884  258469 ssh_runner.go:195] Run: crio config
	I1026 08:31:13.186272  258469 cni.go:84] Creating CNI manager for ""
	I1026 08:31:13.186293  258469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:13.186322  258469 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:31:13.186352  258469 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-752315 NodeName:embed-certs-752315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:31:13.186535  258469 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-752315"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:31:13.186602  258469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:31:13.194688  258469 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:31:13.194780  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:31:13.203627  258469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1026 08:31:13.217643  258469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:31:13.230703  258469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
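The three scp-from-memory writes above install the rendered artifacts: the kubelet drop-in at /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (carrying the ExecStart shown at 08:31:13.139), the kubelet unit itself, and the kubeadm config staged as /var/tmp/minikube/kubeadm.yaml.new. A config like the one rendered above can be checked by hand without applying anything, assuming a kubeadm recent enough to ship the "config validate" subcommand:

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new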
	I1026 08:31:13.245733  258469 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:31:13.249569  258469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:31:13.260816  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:13.348811  258469 ssh_runner.go:195] Run: sudo systemctl start kubelet
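After the daemon-reload and kubelet start above, systemd serves the unit merged with its drop-in; the effective definition can be inspected with:

    systemctl cat kubelet   # prints kubelet.service plus 10-kubeadm.conf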
	I1026 08:31:13.383330  258469 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315 for IP: 192.168.103.2
	I1026 08:31:13.383357  258469 certs.go:195] generating shared ca certs ...
	I1026 08:31:13.383378  258469 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:13.383542  258469 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:31:13.383622  258469 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:31:13.383638  258469 certs.go:257] generating profile certs ...
	I1026 08:31:13.383750  258469 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/client.key
	I1026 08:31:13.383842  258469 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.key.6ac45575
	I1026 08:31:13.383905  258469 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.key
	I1026 08:31:13.384074  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:31:13.384117  258469 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:31:13.384130  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:31:13.384162  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:31:13.384196  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:31:13.384227  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:31:13.384311  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:13.385078  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:31:13.407144  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:31:13.429439  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:31:13.450406  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:31:13.474677  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 08:31:13.497785  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:31:13.516650  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:31:13.535742  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:31:13.555894  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:31:13.575859  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:31:13.595202  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:31:13.613712  258469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
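
The `scp memory --> <path>` entries above stream in-memory buffers (certs, kubeadm config, kubeconfig) to the node over the existing SSH session rather than writing local temp files first. A hedged sketch of that pattern using golang.org/x/crypto/ssh and `sudo tee` (the connection details and empty auth are placeholders; minikube's ssh_runner does roughly this with more error handling):

```go
package main

import (
	"bytes"
	"fmt"

	"golang.org/x/crypto/ssh"
)

// scpMemory streams an in-memory buffer to a file on the node over an
// existing SSH session, via `sudo tee` so the write can land in
// root-owned directories like /var/lib/minikube.
func scpMemory(client *ssh.Client, data []byte, dst string) error {
	sess, err := client.NewSession()
	if err != nil {
		return err
	}
	defer sess.Close()
	sess.Stdin = bytes.NewReader(data)
	// tee writes stdin to dst; >/dev/null keeps the session output quiet.
	return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dst))
}

func main() {
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{}, // key auth elided in this sketch
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33078", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	if err := scpMemory(client, []byte("apiVersion: v1\n"), "/var/lib/minikube/kubeconfig"); err != nil {
		panic(err)
	}
}
```
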
	I1026 08:31:13.627219  258469 ssh_runner.go:195] Run: openssl version
	I1026 08:31:13.633313  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:31:13.644505  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.648652  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.648715  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.685500  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:31:13.694110  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:31:13.704138  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.708490  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.708547  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.756313  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:31:13.764989  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:31:13.774154  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.777980  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.778033  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.814815  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
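
Each CA copied into /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name: `openssl x509 -hash -noout` prints the 8-hex-digit hash, and OpenSSL's certificate lookup expects it as a `<hash>.0` symlink (hence b5213941.0 for minikubeCA above). A small sketch that derives the link name by shelling out to openssl rather than reimplementing the hash:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHashLink returns the "<hash>.0" symlink name OpenSSL expects
// for a CA certificate, matching the ln -fs targets in the log.
func subjectHashLink(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)) + ".0", nil
}

func main() {
	link, err := subjectHashLink("/usr/share/ca-certificates/minikubeCA.pem")
	if err != nil {
		panic(err)
	}
	fmt.Println("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/" + link)
}
```
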
	I1026 08:31:13.823642  258469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:31:13.827879  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:31:13.864559  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:31:13.904863  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:31:13.950925  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:31:14.000498  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:31:14.057883  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
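
The `-checkend 86400` runs above ask openssl whether each control-plane certificate expires within the next 24 hours (86400 seconds); a non-zero exit means renewal is needed. The pure-Go equivalent with crypto/x509, shown as an illustrative sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at pemPath expires
// within duration d — the crypto/x509 analogue of `openssl x509 -checkend`.
func expiresWithin(pemPath string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", pemPath)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
```
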
	I1026 08:31:14.099685  258469 kubeadm.go:400] StartCluster: {Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:14.099770  258469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:31:14.099819  258469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:31:14.134457  258469 cri.go:89] found id: "b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2"
	I1026 08:31:14.134483  258469 cri.go:89] found id: "0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2"
	I1026 08:31:14.134491  258469 cri.go:89] found id: "412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a"
	I1026 08:31:14.134497  258469 cri.go:89] found id: "53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66"
	I1026 08:31:14.134509  258469 cri.go:89] found id: ""
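
Note the trailing empty `found id: ""` above: `crictl ps -a --quiet` emits one container ID per line with a final newline, and splitting on newlines keeps an empty last element. A sketch of that listing and parse, simplified from the `sudo -s eval` form in the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists container IDs the way the cri.go lines
// above do: `crictl ps -a --quiet` filtered by the pod-namespace label.
// Splitting on "\n" keeps a trailing empty entry, which is why the log
// prints a final empty `found id`.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	return strings.Split(string(out), "\n"), nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		panic(err)
	}
	for _, id := range ids {
		fmt.Printf("found id: %q\n", id)
	}
}
```
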
	I1026 08:31:14.134559  258469 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:31:14.146968  258469 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:14Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:14.147066  258469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:31:14.155620  258469 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:31:14.155642  258469 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:31:14.155687  258469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:31:14.163947  258469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:31:14.164861  258469 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-752315" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:14.165611  258469 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9429/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-752315" cluster setting kubeconfig missing "embed-certs-752315" context setting]
	I1026 08:31:14.166654  258469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
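
Here the shared kubeconfig is missing both the cluster and the context entry for this profile, so it is repaired under a file lock before proceeding. A hedged client-go sketch of such a repair (paths and names mirror the log; it omits the credential wiring — client cert/key — that a real repair also needs):

```go
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

// repairKubeconfig adds a missing cluster and context entry for name,
// roughly what the "needs updating (will repair)" step above amounts to.
func repairKubeconfig(path, name, server string) error {
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		return err
	}
	cfg.Clusters[name] = &api.Cluster{Server: server}
	cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
	cfg.CurrentContext = name
	return clientcmd.WriteToFile(*cfg, path)
}

func main() {
	if err := repairKubeconfig(
		"/home/jenkins/minikube-integration/21772-9429/kubeconfig",
		"embed-certs-752315",
		"https://192.168.103.2:8443",
	); err != nil {
		panic(err)
	}
}
```
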
	I1026 08:31:14.168474  258469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:31:14.179855  258469 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 08:31:14.179898  258469 kubeadm.go:601] duration metric: took 24.249521ms to restartPrimaryControlPlane
	I1026 08:31:14.179909  258469 kubeadm.go:402] duration metric: took 80.234805ms to StartCluster
	I1026 08:31:14.179926  258469 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:14.180000  258469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:14.182227  258469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:14.182511  258469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:31:14.182731  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:14.182780  258469 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:31:14.182870  258469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-752315"
	I1026 08:31:14.182892  258469 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-752315"
	W1026 08:31:14.182899  258469 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:31:14.182925  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.183443  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.183537  258469 addons.go:69] Setting dashboard=true in profile "embed-certs-752315"
	I1026 08:31:14.183566  258469 addons.go:238] Setting addon dashboard=true in "embed-certs-752315"
	W1026 08:31:14.183575  258469 addons.go:247] addon dashboard should already be in state true
	I1026 08:31:14.183608  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.183634  258469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-752315"
	I1026 08:31:14.183655  258469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-752315"
	I1026 08:31:14.183932  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.184081  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.186657  258469 out.go:179] * Verifying Kubernetes components...
	I1026 08:31:14.188336  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:14.210539  258469 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:31:14.211738  258469 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:14.211773  258469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:31:14.211827  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.212844  258469 addons.go:238] Setting addon default-storageclass=true in "embed-certs-752315"
	W1026 08:31:14.212862  258469 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:31:14.212888  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.213357  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.214707  258469 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 08:31:14.215844  258469 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 08:31:14.216878  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:31:14.216896  258469 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:31:14.216959  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.241222  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.242903  258469 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:14.242928  258469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:31:14.243002  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.248358  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.274441  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.343526  258469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:14.357603  258469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:31:14.371975  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:31:14.372002  258469 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:31:14.372228  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:14.387399  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:31:14.387425  258469 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:31:14.396324  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:14.405054  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:31:14.405090  258469 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:31:14.422329  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:31:14.422351  258469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:31:14.441228  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:31:14.441266  258469 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:31:14.459443  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:31:14.459471  258469 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:31:14.473287  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:31:14.473313  258469 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:31:14.488458  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:31:14.488482  258469 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:31:14.503998  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:31:14.504023  258469 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:31:14.517915  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:31:15.891031  258469 node_ready.go:49] node "embed-certs-752315" is "Ready"
	I1026 08:31:15.891068  258469 node_ready.go:38] duration metric: took 1.533436802s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:31:15.891085  258469 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:31:15.891137  258469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:31:16.440432  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.068171271s)
	I1026 08:31:16.440492  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.044134258s)
	I1026 08:31:16.440595  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.922646888s)
	I1026 08:31:16.440643  258469 api_server.go:72] duration metric: took 2.258096796s to wait for apiserver process to appear ...
	I1026 08:31:16.440664  258469 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:31:16.440727  258469 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:31:16.442359  258469 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-752315 addons enable metrics-server
	
	I1026 08:31:16.447206  258469 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:31:16.447234  258469 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
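
A freshly restarted apiserver typically answers /healthz with 500 while post-start hooks such as rbac/bootstrap-roles finish, which is exactly what the dump above shows; the caller just re-polls until it gets 200. An illustrative poller (TLS verification is skipped here for brevity; minikube itself trusts the cluster CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls /healthz until it returns 200 or the timeout
// elapses, treating early 500s ([-]poststarthook/... failed) as
// transient rather than fatal.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		panic(err)
	}
}
```
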
	I1026 08:31:16.453268  258469 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 08:31:13.180736  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:13.181229  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:13.181305  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:13.181377  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:13.210376  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:13.210402  204716 cri.go:89] found id: ""
	I1026 08:31:13.210412  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:13.210470  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.214344  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:13.214400  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:13.241763  204716 cri.go:89] found id: ""
	I1026 08:31:13.241785  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.241803  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:13.241809  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:13.241854  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:13.269564  204716 cri.go:89] found id: ""
	I1026 08:31:13.269589  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.269596  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:13.269603  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:13.269659  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:13.301411  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:13.301436  204716 cri.go:89] found id: ""
	I1026 08:31:13.301445  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:13.301499  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.305521  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:13.305589  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:13.334027  204716 cri.go:89] found id: ""
	I1026 08:31:13.334054  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.334063  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:13.334068  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:13.334165  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:13.362278  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:13.362298  204716 cri.go:89] found id: ""
	I1026 08:31:13.362306  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:13.362365  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.366413  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:13.366474  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:13.397568  204716 cri.go:89] found id: ""
	I1026 08:31:13.397605  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.397615  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:13.397622  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:13.397696  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:13.428742  204716 cri.go:89] found id: ""
	I1026 08:31:13.428769  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.428780  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:13.428791  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:13.428806  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:13.500881  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:13.500900  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:13.500912  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:13.533972  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:13.534002  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:13.594834  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:13.594876  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:13.620981  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:13.621026  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:13.671810  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:13.671843  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:13.705028  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:13.705063  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:13.813306  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:13.813336  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
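
With the apiserver refusing connections, the tooling falls back to gathering component logs over SSH: a 400-line tail of each known container via crictl, plus the kubelet and crio journals and filtered dmesg. The sweep reduces to a loop over the same bash one-liners, sketched here with the kube-apiserver container ID from the log:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gather mirrors the "Gathering logs for ..." sweep above: tail each
// known container plus the kubelet/crio journals and dmesg, all run via
// `bash -c` exactly as the log lines show.
func gather(containerIDs []string) {
	cmds := []string{
		"sudo journalctl -u kubelet -n 400",
		"sudo journalctl -u crio -n 400",
		"sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
	}
	for _, id := range containerIDs {
		cmds = append(cmds, "sudo /usr/local/bin/crictl logs --tail 400 "+id)
	}
	for _, c := range cmds {
		out, err := exec.Command("/bin/bash", "-c", c).CombinedOutput()
		fmt.Printf("==> %s <==\n%s(err: %v)\n", c, out, err)
	}
}

func main() {
	gather([]string{"d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"})
}
```
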
	I1026 08:31:16.329962  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:16.330520  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:16.330583  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:16.330654  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:16.362891  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:16.362910  204716 cri.go:89] found id: ""
	I1026 08:31:16.362918  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:16.362964  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.367015  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:16.367090  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:16.397384  204716 cri.go:89] found id: ""
	I1026 08:31:16.397415  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.397427  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:16.397435  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:16.397490  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:16.427189  204716 cri.go:89] found id: ""
	I1026 08:31:16.427216  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.427233  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:16.427240  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:16.427324  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:16.457344  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:16.457359  204716 cri.go:89] found id: ""
	I1026 08:31:16.457372  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:16.457430  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.462847  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:16.462919  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:16.494130  204716 cri.go:89] found id: ""
	I1026 08:31:16.494157  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.494168  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:16.494175  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:16.494236  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:16.527074  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:16.527100  204716 cri.go:89] found id: ""
	I1026 08:31:16.527110  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:16.527169  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.532570  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:16.532630  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:16.565328  204716 cri.go:89] found id: ""
	I1026 08:31:16.565352  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.565360  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:16.565365  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:16.565426  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:16.592471  204716 cri.go:89] found id: ""
	I1026 08:31:16.592500  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.592510  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:16.592519  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:16.592531  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:16.628096  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:16.628136  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:16.682413  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:16.682449  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:16.709799  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:16.709827  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:16.758903  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:16.758938  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1026 08:31:13.987643  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:15.988474  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.121749Z" level=info msg="Created container 8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh/kubernetes-dashboard" id=ec9eae44-7604-49f9-b896-b088c4db63a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.122359363Z" level=info msg="Starting container: 8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5" id=6bfd9d0f-2db2-4ca3-8491-b5a9734ed83f name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.124001818Z" level=info msg="Started container" PID=1731 containerID=8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh/kubernetes-dashboard id=6bfd9d0f-2db2-4ca3-8491-b5a9734ed83f name=/runtime.v1.RuntimeService/StartContainer sandboxID=597d9b8123579b4a431a49d1015ca7b84edd6f2bfc1e15b15c7363c74bc7abf3
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.920104441Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24c755b2-7aa9-4ee7-a9f7-dbbfe0e842a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.921033434Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=70bbe60f-2262-4000-b839-60d2e369bc7f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.92201144Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=570b4cf8-470e-492a-8877-cd7f30474091 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.922149302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926781519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926926039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/89926590478c5943b0f042bf0cbe00f844fb32a97a19e13c9a41c8f466196a3e/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926948998Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/89926590478c5943b0f042bf0cbe00f844fb32a97a19e13c9a41c8f466196a3e/merged/etc/group: no such file or directory"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.9273309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.957623439Z" level=info msg="Created container a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c: kube-system/storage-provisioner/storage-provisioner" id=570b4cf8-470e-492a-8877-cd7f30474091 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.958304679Z" level=info msg="Starting container: a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c" id=70853703-a190-4231-b2e4-6458c48efbde name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.961592277Z" level=info msg="Started container" PID=1757 containerID=a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c description=kube-system/storage-provisioner/storage-provisioner id=70853703-a190-4231-b2e4-6458c48efbde name=/runtime.v1.RuntimeService/StartContainer sandboxID=cda4db2edaa1968e664d8aa120f28c7f4e23afae313da61c4ee6d4e049446ea9
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.804101497Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98d33240-9c7f-4451-bc62-3f8440f25cfa name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.80506713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=723ca4a6-f8c8-4305-958e-48d9023a2425 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.806063194Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=59e9ac13-7e98-4588-9f19-fa79cd98c773 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.806195865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.812789216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.813530655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.848134072Z" level=info msg="Created container fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=59e9ac13-7e98-4588-9f19-fa79cd98c773 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.848823628Z" level=info msg="Starting container: fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d" id=ff88478e-a9fe-472a-8f81-aee38036277e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.851138539Z" level=info msg="Started container" PID=1788 containerID=fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper id=ff88478e-a9fe-472a-8f81-aee38036277e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e832e7a50aeb9f2619b125376c79e3e6deadddb7ebbe7eab5247f5c98f5612ae
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.938739514Z" level=info msg="Removing container: 45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d" id=89d1bae9-3965-4478-bbc9-6e7a462d22e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.951317929Z" level=info msg="Removed container 45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=89d1bae9-3965-4478-bbc9-6e7a462d22e9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	fc59cd40c6251       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   2                   e832e7a50aeb9       dashboard-metrics-scraper-5f989dc9cf-l92pl       kubernetes-dashboard
	a05f9bc7d8515       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           16 seconds ago      Running             storage-provisioner         1                   cda4db2edaa19       storage-provisioner                              kube-system
	8ba7298a29c40       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   29 seconds ago      Running             kubernetes-dashboard        0                   597d9b8123579       kubernetes-dashboard-8694d4445c-7kfvh            kubernetes-dashboard
	97a9356c65d4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           47 seconds ago      Running             coredns                     0                   e070d90916789       coredns-5dd5756b68-wrpqk                         kube-system
	2ec7dc5b7e012       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           47 seconds ago      Running             busybox                     1                   3e162c6c4f2cf       busybox                                          default
	31e670af5aeb0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           47 seconds ago      Running             kube-proxy                  0                   2ad46201f3a03       kube-proxy-455nz                                 kube-system
	f2c64b3865d37       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           47 seconds ago      Running             kindnet-cni                 0                   b1b942a26efe0       kindnet-6mfc2                                    kube-system
	ea4eca76c9673       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           47 seconds ago      Exited              storage-provisioner         0                   cda4db2edaa19       storage-provisioner                              kube-system
	05c780d0419bf       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           50 seconds ago      Running             kube-controller-manager     0                   1042814f0e6b6       kube-controller-manager-old-k8s-version-810379   kube-system
	91140716b117c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           50 seconds ago      Running             kube-scheduler              0                   dbf9e2ba833da       kube-scheduler-old-k8s-version-810379            kube-system
	8d811096167c8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           50 seconds ago      Running             etcd                        0                   e3a95fee53b96       etcd-old-k8s-version-810379                      kube-system
	b4b1d14a54456       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           50 seconds ago      Running             kube-apiserver              0                   f38a7d22e2c72       kube-apiserver-old-k8s-version-810379            kube-system
	
	
	==> coredns [97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40779 - 27509 "HINFO IN 1732453957897710394.6348622279188320067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032397624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               old-k8s-version-810379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-810379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-810379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_29_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-810379
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-810379
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d265c90b-90d2-4c31-9d3f-ae5ff5d718c0
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 coredns-5dd5756b68-wrpqk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-old-k8s-version-810379                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-6mfc2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      102s
	  kube-system                 kube-apiserver-old-k8s-version-810379             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-old-k8s-version-810379    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-455nz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 kube-scheduler-old-k8s-version-810379             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l92pl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7kfvh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 101s               kube-proxy       
	  Normal  Starting                 47s                kube-proxy       
	  Normal  Starting                 2m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m)    kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m)    kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m)    kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s               kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node old-k8s-version-810379 event: Registered Node old-k8s-version-810379 in Controller
	  Normal  NodeReady                89s                kubelet          Node old-k8s-version-810379 status is now: NodeReady
	  Normal  Starting                 51s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  51s (x8 over 51s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    51s (x8 over 51s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     51s (x8 over 51s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           35s                node-controller  Node old-k8s-version-810379 event: Registered Node old-k8s-version-810379 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503] <==
	{"level":"info","ts":"2025-10-26T08:30:30.377382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-26T08:30:30.377622Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:30:30.377722Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:30:30.378829Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T08:30:30.378944Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:30:30.379044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:30:30.379224Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T08:30:30.379326Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T08:30:31.668935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.668992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.669012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.669028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.670628Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-810379 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T08:30:31.670637Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:30:31.67066Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:30:31.670832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T08:30:31.670864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T08:30:31.67172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T08:30:31.671822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-10-26T08:30:52.055919Z","caller":"traceutil/trace.go:171","msg":"trace[1036694784] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"112.796718ms","start":"2025-10-26T08:30:51.943086Z","end":"2025-10-26T08:30:52.055883Z","steps":["trace[1036694784] 'process raft request'  (duration: 112.678044ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:52.055935Z","caller":"traceutil/trace.go:171","msg":"trace[650569965] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"114.046878ms","start":"2025-10-26T08:30:51.941869Z","end":"2025-10-26T08:30:52.055916Z","steps":["trace[650569965] 'process raft request'  (duration: 87.915391ms)","trace[650569965] 'compare'  (duration: 25.798373ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:30:52.055937Z","caller":"traceutil/trace.go:171","msg":"trace[1549737321] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"113.876947ms","start":"2025-10-26T08:30:51.942019Z","end":"2025-10-26T08:30:52.055896Z","steps":["trace[1549737321] 'process raft request'  (duration: 113.692049ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:31:20 up  1:13,  0 user,  load average: 3.60, 3.15, 2.01
	Linux old-k8s-version-810379 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad] <==
	I1026 08:30:33.364192       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:30:33.453643       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:30:33.453798       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:30:33.453820       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:30:33.453841       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:30:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:30:33.656888       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:30:33.656926       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:30:33.656940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:30:33.657081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:30:34.057036       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:30:34.057062       1 metrics.go:72] Registering metrics
	I1026 08:30:34.057128       1 controller.go:711] "Syncing nftables rules"
	I1026 08:30:43.658952       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:30:43.659029       1 main.go:301] handling current node
	I1026 08:30:53.657886       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:30:53.657939       1 main.go:301] handling current node
	I1026 08:31:03.657153       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:31:03.657189       1 main.go:301] handling current node
	I1026 08:31:13.659503       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:31:13.659544       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec] <==
	I1026 08:30:32.717534       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:30:32.749903       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 08:30:32.772708       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 08:30:32.772805       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 08:30:32.772912       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:30:32.772942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 08:30:32.772961       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 08:30:32.773136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 08:30:32.773198       1 aggregator.go:166] initial CRD sync complete...
	I1026 08:30:32.773215       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 08:30:32.773222       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:30:32.773229       1 cache.go:39] Caches are synced for autoregister controller
	E1026 08:30:32.778164       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:30:32.782773       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 08:30:33.675557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:30:33.751142       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 08:30:33.792310       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 08:30:33.812870       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:30:33.824274       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:30:33.835630       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 08:30:33.904872       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.133.233"}
	I1026 08:30:33.923014       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.36.6"}
	I1026 08:30:45.613553       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 08:30:45.622985       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:30:45.701815       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea] <==
	I1026 08:30:45.718982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.159676ms"
	I1026 08:30:45.721089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.779243ms"
	I1026 08:30:45.726971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.932362ms"
	I1026 08:30:45.727069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.003µs"
	I1026 08:30:45.728402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.265837ms"
	I1026 08:30:45.728477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.925µs"
	I1026 08:30:45.733009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.727µs"
	I1026 08:30:45.740693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.088µs"
	I1026 08:30:45.744717       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 08:30:45.765121       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:30:45.794925       1 shared_informer.go:318] Caches are synced for stateful set
	I1026 08:30:45.801525       1 shared_informer.go:318] Caches are synced for disruption
	I1026 08:30:45.803892       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:30:46.174931       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:30:46.174961       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 08:30:46.184121       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:30:48.889509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.252µs"
	I1026 08:30:49.893207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.883µs"
	I1026 08:30:50.950588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.299µs"
	I1026 08:30:52.057593       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="117.929097ms"
	I1026 08:30:52.057823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.394µs"
	I1026 08:31:03.785310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.052548ms"
	I1026 08:31:03.785406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.385µs"
	I1026 08:31:08.950292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.302µs"
	I1026 08:31:16.034173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.076µs"
	
	
	==> kube-proxy [31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098] <==
	I1026 08:30:33.249980       1 server_others.go:69] "Using iptables proxy"
	I1026 08:30:33.264714       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1026 08:30:33.292295       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:30:33.294901       1 server_others.go:152] "Using iptables Proxier"
	I1026 08:30:33.294940       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 08:30:33.294949       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 08:30:33.294989       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 08:30:33.295280       1 server.go:846] "Version info" version="v1.28.0"
	I1026 08:30:33.295346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:33.296613       1 config.go:315] "Starting node config controller"
	I1026 08:30:33.296707       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 08:30:33.296881       1 config.go:97] "Starting endpoint slice config controller"
	I1026 08:30:33.296914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 08:30:33.297078       1 config.go:188] "Starting service config controller"
	I1026 08:30:33.297228       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 08:30:33.396949       1 shared_informer.go:318] Caches are synced for node config
	I1026 08:30:33.397480       1 shared_informer.go:318] Caches are synced for service config
	I1026 08:30:33.397557       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded] <==
	I1026 08:30:31.057789       1 serving.go:348] Generated self-signed cert in-memory
	I1026 08:30:32.744422       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 08:30:32.746295       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:32.752748       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 08:30:32.752854       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 08:30:32.752875       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 08:30:32.752894       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 08:30:32.753344       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:30:32.753405       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 08:30:32.754034       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:30:32.754054       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 08:30:32.854455       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 08:30:32.854624       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 08:30:32.856106       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791818     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6b85d1f8-06ed-4998-bad2-19ba60a53a1f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7kfvh\" (UID: \"6b85d1f8-06ed-4998-bad2-19ba60a53a1f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791868     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jgk\" (UniqueName: \"kubernetes.io/projected/6b85d1f8-06ed-4998-bad2-19ba60a53a1f-kube-api-access-d5jgk\") pod \"kubernetes-dashboard-8694d4445c-7kfvh\" (UID: \"6b85d1f8-06ed-4998-bad2-19ba60a53a1f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791897     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b7d4875-4cc0-430e-b814-d8c405201f19-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l92pl\" (UID: \"1b7d4875-4cc0-430e-b814-d8c405201f19\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791918     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dq7\" (UniqueName: \"kubernetes.io/projected/1b7d4875-4cc0-430e-b814-d8c405201f19-kube-api-access-j5dq7\") pod \"dashboard-metrics-scraper-5f989dc9cf-l92pl\" (UID: \"1b7d4875-4cc0-430e-b814-d8c405201f19\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl"
	Oct 26 08:30:48 old-k8s-version-810379 kubelet[720]: I1026 08:30:48.875324     720 scope.go:117] "RemoveContainer" containerID="3dce51d3ce60cb6e9dd7a6a7e9ba3721431364c56d8edfe1cb5b2be32c73a1ed"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: I1026 08:30:49.879916     720 scope.go:117] "RemoveContainer" containerID="3dce51d3ce60cb6e9dd7a6a7e9ba3721431364c56d8edfe1cb5b2be32c73a1ed"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: I1026 08:30:49.880099     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: E1026 08:30:49.880467     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:30:50 old-k8s-version-810379 kubelet[720]: I1026 08:30:50.883644     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:50 old-k8s-version-810379 kubelet[720]: E1026 08:30:50.884069     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:30:51 old-k8s-version-810379 kubelet[720]: I1026 08:30:51.939752     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh" podStartSLOduration=1.8956233949999999 podCreationTimestamp="2025-10-26 08:30:45 +0000 UTC" firstStartedPulling="2025-10-26 08:30:46.044589724 +0000 UTC m=+16.334859118" lastFinishedPulling="2025-10-26 08:30:51.088654964 +0000 UTC m=+21.378924372" observedRunningTime="2025-10-26 08:30:51.939273554 +0000 UTC m=+22.229542965" watchObservedRunningTime="2025-10-26 08:30:51.939688649 +0000 UTC m=+22.229958060"
	Oct 26 08:30:56 old-k8s-version-810379 kubelet[720]: I1026 08:30:56.024220     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:56 old-k8s-version-810379 kubelet[720]: E1026 08:30:56.024697     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:03 old-k8s-version-810379 kubelet[720]: I1026 08:31:03.919613     720 scope.go:117] "RemoveContainer" containerID="ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.803177     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.937190     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.937450     720 scope.go:117] "RemoveContainer" containerID="fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: E1026 08:31:08.937816     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:16 old-k8s-version-810379 kubelet[720]: I1026 08:31:16.023857     720 scope.go:117] "RemoveContainer" containerID="fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	Oct 26 08:31:16 old-k8s-version-810379 kubelet[720]: E1026 08:31:16.024285     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:31:17 old-k8s-version-810379 kubelet[720]: I1026 08:31:17.619906     720 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: kubelet.service: Consumed 1.463s CPU time.
	
	
	==> kubernetes-dashboard [8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5] <==
	2025/10/26 08:30:51 Starting overwatch
	2025/10/26 08:30:51 Using namespace: kubernetes-dashboard
	2025/10/26 08:30:51 Using in-cluster config to connect to apiserver
	2025/10/26 08:30:51 Using secret token for csrf signing
	2025/10/26 08:30:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:30:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:30:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 08:30:51 Generating JWE encryption key
	2025/10/26 08:30:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:30:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:30:51 Initializing JWE encryption key from synchronized object
	2025/10/26 08:30:51 Creating in-cluster Sidecar client
	2025/10/26 08:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:30:51 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c] <==
	I1026 08:31:03.973483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:03.980987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:03.981038       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	
	
	==> storage-provisioner [ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5] <==
	I1026 08:30:33.202085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:03.204723       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-810379 -n old-k8s-version-810379: exit status 2 (393.348602ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-810379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-810379
helpers_test.go:243: (dbg) docker inspect old-k8s-version-810379:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	        "Created": "2025-10-26T08:29:09.042514733Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 249701,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:30:23.583603168Z",
	            "FinishedAt": "2025-10-26T08:30:22.660960443Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hostname",
	        "HostsPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/hosts",
	        "LogPath": "/var/lib/docker/containers/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c/ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c-json.log",
	        "Name": "/old-k8s-version-810379",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-810379:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-810379",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ccdf5b36aedff1dff8ac82c9bbf83f5605b92faa879c1ab3ab6725e03e01780c",
	                "LowerDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/merged",
	                "UpperDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/diff",
	                "WorkDir": "/var/lib/docker/overlay2/25870ec5365b41162d2a473a99dee21dda977cccb4c0d926dadb2870c0847e37/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-810379",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-810379/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-810379",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-810379",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "953375e7196951ec0716c8fa4b523e4e7b4c7e784936f550cd5e828bf3cc9937",
	            "SandboxKey": "/var/run/docker/netns/953375e71969",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33068"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33069"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33072"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33070"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33071"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-810379": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:3c:9a:64:eb:e9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19bd044ce129aeaf476dbf54add850f4fcc444c6e57c15a6d61eea854dbd9172",
	                    "EndpointID": "26dd318f7999c3e7ded5e6872f4d2d9e3838a16f11c82294b6ec550e64ebcc7b",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-810379",
	                        "ccdf5b36aedf"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379: exit status 2 (398.647028ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-810379 logs -n 25: (1.435433268s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                                                                                                                         │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p cert-expiration-535689                                                                                                                                                                                                                     │ cert-expiration-535689 │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ stop    │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p NoKubernetes-815548 --driver=docker  --container-runtime=crio                                                                                                                                                                              │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ ssh     │ -p NoKubernetes-815548 sudo systemctl is-active --quiet service kubelet                                                                                                                                                                       │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │                     │
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548    │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983      │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315     │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:31:05
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:31:05.805242  258469 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:05.805416  258469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:05.805428  258469 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:05.805433  258469 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:05.805734  258469 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:05.806286  258469 out.go:368] Setting JSON to false
	I1026 08:31:05.807626  258469 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4417,"bootTime":1761463049,"procs":340,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:31:05.807741  258469 start.go:141] virtualization: kvm guest
	I1026 08:31:05.809911  258469 out.go:179] * [embed-certs-752315] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:31:05.811961  258469 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:31:05.812146  258469 notify.go:220] Checking for updates...
	I1026 08:31:05.815169  258469 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:31:05.817535  258469 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:05.821945  258469 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:31:05.823526  258469 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:31:05.825137  258469 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:31:05.827559  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:05.828188  258469 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:31:05.855464  258469 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:31:05.855571  258469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:05.914562  258469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:05.902620687 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:05.914668  258469 docker.go:318] overlay module found
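
minikube shells out to docker system info --format "{{json .}}" and parses the JSON quoted above. The same fields can be checked by hand; a sketch assuming jq is available (all keys shown appear verbatim in the output above):

	# Extract the fields that matter for driver validation:
	docker system info --format '{{json .}}' | \
		jq '{Driver, CgroupDriver, ServerVersion, NCPU, MemTotal}'
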
	I1026 08:31:05.916550  258469 out.go:179] * Using the docker driver based on existing profile
	I1026 08:31:05.917797  258469 start.go:305] selected driver: docker
	I1026 08:31:05.917813  258469 start.go:925] validating driver "docker" against &{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:05.917890  258469 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:31:05.918484  258469 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:05.976198  258469 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:68 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:05.966373611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:05.976479  258469 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:31:05.976509  258469 cni.go:84] Creating CNI manager for ""
	I1026 08:31:05.976560  258469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:05.976592  258469 start.go:349] cluster config:
	{Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:05.979320  258469 out.go:179] * Starting "embed-certs-752315" primary control-plane node in "embed-certs-752315" cluster
	I1026 08:31:05.980516  258469 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:31:05.981982  258469 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:31:05.983375  258469 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:05.983408  258469 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:31:05.983429  258469 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:31:05.983444  258469 cache.go:58] Caching tarball of preloaded images
	I1026 08:31:05.983554  258469 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:31:05.983569  258469 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:31:05.983685  258469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json ...
	I1026 08:31:06.007074  258469 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:31:06.007099  258469 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:31:06.007114  258469 cache.go:232] Successfully downloaded all kic artifacts
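
The cache step above reduces to a local image lookup: if the kic base image resolves in the daemon, both the pull and the load are skipped. A sketch using the tag from the log (minikube additionally pins it to the sha256 digest shown above):

	# Exit code 0 and an image ID mean the base image is already local:
	docker image inspect --format '{{.Id}}' \
		gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773
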
	I1026 08:31:06.007148  258469 start.go:360] acquireMachinesLock for embed-certs-752315: {Name:mke5e92fe2bbc27b2e8ece3d6f167d2db37c8fc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:31:06.007214  258469 start.go:364] duration metric: took 43.528µs to acquireMachinesLock for "embed-certs-752315"
	I1026 08:31:06.007237  258469 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:31:06.007244  258469 fix.go:54] fixHost starting: 
	I1026 08:31:06.007638  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:06.027721  258469 fix.go:112] recreateIfNeeded on embed-certs-752315: state=Stopped err=<nil>
	W1026 08:31:06.027754  258469 fix.go:138] unexpected machine state, will restart: <nil>
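
Interleaved below, the old-k8s-version start (pid 204716) is still polling its apiserver. The probe can be reproduced by hand; a sketch with the IP and port taken from the log (-k because the host does not trust the cluster CA):

	# A healthy apiserver answers "ok"; "connection refused" matches the lines below:
	curl -ks --max-time 2 https://192.168.85.2:8443/healthz; echo
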
	I1026 08:31:03.439904  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:03.440395  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:03.440443  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:03.440495  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:03.469040  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:03.469059  204716 cri.go:89] found id: ""
	I1026 08:31:03.469067  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:03.469114  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.474135  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:03.474192  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:03.508636  204716 cri.go:89] found id: ""
	I1026 08:31:03.508662  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.508670  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:03.508676  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:03.508725  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:03.540107  204716 cri.go:89] found id: ""
	I1026 08:31:03.540132  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.540142  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:03.540149  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:03.540210  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:03.570207  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:03.570232  204716 cri.go:89] found id: ""
	I1026 08:31:03.570242  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:03.570327  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.574669  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:03.574733  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:03.600821  204716 cri.go:89] found id: ""
	I1026 08:31:03.600849  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.600859  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:03.600865  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:03.600925  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:03.630046  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:03.630078  204716 cri.go:89] found id: ""
	I1026 08:31:03.630087  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:03.630138  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:03.634284  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:03.634356  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:03.661460  204716 cri.go:89] found id: ""
	I1026 08:31:03.661486  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.661497  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:03.661504  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:03.661564  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:03.687920  204716 cri.go:89] found id: ""
	I1026 08:31:03.687948  204716 logs.go:282] 0 containers: []
	W1026 08:31:03.687959  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:03.687969  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:03.687985  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:03.719654  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:03.719678  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:03.822895  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:03.822927  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:03.837751  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:03.837779  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:03.895660  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:03.895682  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:03.895699  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:03.932786  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:03.932821  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:03.997277  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:03.997308  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:04.024432  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:04.024461  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
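
Each gathering cycle above follows one recipe per component: resolve the container ID with crictl, then tail its logs. Run on the node, a sketch for kube-apiserver looks like:

	# Find the kube-apiserver container in any state, then tail its last 400 log lines:
	id=$(sudo crictl ps -a --quiet --name=kube-apiserver)
	[ -n "$id" ] && sudo crictl logs --tail 400 "$id"
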
	I1026 08:31:06.577738  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:06.578682  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:06.578740  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:06.578799  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:06.623062  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:06.623095  204716 cri.go:89] found id: ""
	I1026 08:31:06.623105  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:06.623173  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.629127  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:06.629202  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:06.664670  204716 cri.go:89] found id: ""
	I1026 08:31:06.664703  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.664714  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:06.664721  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:06.664775  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:06.705708  204716 cri.go:89] found id: ""
	I1026 08:31:06.705736  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.705747  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:06.705755  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:06.705821  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:06.745521  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:06.745617  204716 cri.go:89] found id: ""
	I1026 08:31:06.745633  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:06.745685  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.751273  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:06.751342  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:02.941177  255419 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:31:02.945623  255419 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:31:02.946691  255419 api_server.go:141] control plane version: v1.34.1
	I1026 08:31:02.946712  255419 api_server.go:131] duration metric: took 1.006296161s to wait for apiserver health ...
	I1026 08:31:02.946720  255419 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:31:02.950182  255419 system_pods.go:59] 8 kube-system pods found
	I1026 08:31:02.950231  255419 system_pods.go:61] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:02.950260  255419 system_pods.go:61] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:02.950273  255419 system_pods.go:61] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:31:02.950283  255419 system_pods.go:61] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:02.950292  255419 system_pods.go:61] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:02.950303  255419 system_pods.go:61] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:31:02.950319  255419 system_pods.go:61] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:02.950328  255419 system_pods.go:61] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:31:02.950335  255419 system_pods.go:74] duration metric: took 3.609576ms to wait for pod list to return data ...
	I1026 08:31:02.950347  255419 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:31:02.952826  255419 default_sa.go:45] found service account: "default"
	I1026 08:31:02.952846  255419 default_sa.go:55] duration metric: took 2.488921ms for default service account to be created ...
	I1026 08:31:02.952856  255419 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:31:02.955696  255419 system_pods.go:86] 8 kube-system pods found
	I1026 08:31:02.955728  255419 system_pods.go:89] "coredns-66bc5c9577-p5nmq" [9ab93365-e465-4f64-aed0-d44be160f82d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:02.955742  255419 system_pods.go:89] "etcd-no-preload-001983" [90bf4691-e737-48b8-a410-836e5961cfab] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:02.955753  255419 system_pods.go:89] "kindnet-8lrm6" [8f793c9d-8d06-4fd2-a937-fe2736ff2c5a] Running
	I1026 08:31:02.955762  255419 system_pods.go:89] "kube-apiserver-no-preload-001983" [aadc8b6d-28d3-400b-9e0c-227420fad773] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:02.955770  255419 system_pods.go:89] "kube-controller-manager-no-preload-001983" [936f9efe-d5d6-4101-8416-9e2b68319f1f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:02.955777  255419 system_pods.go:89] "kube-proxy-xpz59" [0c7993ca-1a79-4128-8863-3a16d46c0f8d] Running
	I1026 08:31:02.955785  255419 system_pods.go:89] "kube-scheduler-no-preload-001983" [b800ef5f-5c23-40d1-9149-38991e979864] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:02.955794  255419 system_pods.go:89] "storage-provisioner" [23d54628-ab9a-49f0-bd02-fdf50b08c93e] Running
	I1026 08:31:02.955806  255419 system_pods.go:126] duration metric: took 2.943417ms to wait for k8s-apps to be running ...
	I1026 08:31:02.955818  255419 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:31:02.955867  255419 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:02.968906  255419 system_svc.go:56] duration metric: took 13.078941ms WaitForService to wait for kubelet
	I1026 08:31:02.968938  255419 kubeadm.go:586] duration metric: took 3.499490317s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:31:02.968958  255419 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:31:02.972195  255419 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:31:02.972238  255419 node_conditions.go:123] node cpu capacity is 8
	I1026 08:31:02.972269  255419 node_conditions.go:105] duration metric: took 3.305073ms to run NodePressure ...
	I1026 08:31:02.972284  255419 start.go:241] waiting for startup goroutines ...
	I1026 08:31:02.972295  255419 start.go:246] waiting for cluster config update ...
	I1026 08:31:02.972308  255419 start.go:255] writing updated cluster config ...
	I1026 08:31:02.972635  255419 ssh_runner.go:195] Run: rm -f paused
	I1026 08:31:02.977319  255419 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:31:02.980807  255419 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p5nmq" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:31:04.985926  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:06.987540  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
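
The extra pod_ready wait on the no-preload cluster (pid 255419) is roughly a label-scoped readiness wait. A sketch for the coredns pod that keeps reporting not-Ready above, with the same 4m budget as the log (kubectl wait is a stand-in for minikube's internal poller, not what the test itself runs):

	# Block until coredns is Ready, or give up after four minutes:
	kubectl -n kube-system wait pod -l k8s-app=kube-dns \
		--for=condition=Ready --timeout=4m
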
	I1026 08:31:06.029201  258469 out.go:252] * Restarting existing docker container for "embed-certs-752315" ...
	I1026 08:31:06.029300  258469 cli_runner.go:164] Run: docker start embed-certs-752315
	I1026 08:31:06.346576  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:06.371753  258469 kic.go:430] container "embed-certs-752315" state is running.
	I1026 08:31:06.372675  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:06.397464  258469 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/config.json ...
	I1026 08:31:06.397824  258469 machine.go:93] provisionDockerMachine start ...
	I1026 08:31:06.397901  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:06.422854  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:06.423234  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:06.423263  258469 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:31:06.424075  258469 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:47670->127.0.0.1:33078: read: connection reset by peer
	I1026 08:31:09.580976  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-752315
	
	I1026 08:31:09.581011  258469 ubuntu.go:182] provisioning hostname "embed-certs-752315"
	I1026 08:31:09.581072  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:09.603783  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:09.604133  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:09.604155  258469 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-752315 && echo "embed-certs-752315" | sudo tee /etc/hostname
	I1026 08:31:09.775966  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-752315
	
	I1026 08:31:09.776051  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:09.800007  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:09.800332  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:09.800362  258469 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-752315' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-752315/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-752315' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:31:09.959841  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
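
The shell block above is idempotent: it rewrites an existing 127.0.1.1 entry or appends one. A quick on-node check that hostname and mapping now agree (a sketch, run over the same SSH session):

	hostname
	grep -E '^127\.0\.1\.1[[:space:]]' /etc/hosts
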
	I1026 08:31:09.959872  258469 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:31:09.959893  258469 ubuntu.go:190] setting up certificates
	I1026 08:31:09.959903  258469 provision.go:84] configureAuth start
	I1026 08:31:09.959976  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:09.982498  258469 provision.go:143] copyHostCerts
	I1026 08:31:09.982605  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:31:09.982628  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:31:09.982716  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:31:09.982862  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:31:09.982877  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:31:09.982925  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:31:09.983272  258469 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:31:09.983287  258469 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:31:09.983336  258469 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:31:09.983436  258469 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.embed-certs-752315 san=[127.0.0.1 192.168.103.2 embed-certs-752315 localhost minikube]
	I1026 08:31:10.490412  258469 provision.go:177] copyRemoteCerts
	I1026 08:31:10.490469  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:31:10.490512  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:10.515663  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:10.631192  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:31:10.657511  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 08:31:10.682582  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:31:10.708228  258469 provision.go:87] duration metric: took 748.30937ms to configureAuth
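
configureAuth just generated and pushed a server certificate with the SANs listed a few lines up (127.0.0.1, 192.168.103.2, embed-certs-752315, localhost, minikube). A sketch confirming they landed in the provisioned cert, using the remote path from the scp lines:

	# Print the SANs baked into the server certificate:
	sudo openssl x509 -in /etc/docker/server.pem -noout -text | \
		grep -A1 'Subject Alternative Name'
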
	I1026 08:31:10.708282  258469 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:31:10.708512  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:10.708661  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:10.734238  258469 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:10.734552  258469 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33078 <nil> <nil>}
	I1026 08:31:10.734583  258469 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:31:06.785668  204716 cri.go:89] found id: ""
	I1026 08:31:06.785690  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.785698  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:06.785704  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:06.785753  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:06.816653  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:06.816674  204716 cri.go:89] found id: ""
	I1026 08:31:06.816682  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:06.816737  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:06.820934  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:06.821005  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:06.849023  204716 cri.go:89] found id: ""
	I1026 08:31:06.849048  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.849056  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:06.849062  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:06.849470  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:06.883530  204716 cri.go:89] found id: ""
	I1026 08:31:06.883557  204716 logs.go:282] 0 containers: []
	W1026 08:31:06.883577  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:06.883587  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:06.883631  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:07.019857  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:07.019890  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:07.038714  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:07.038746  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:07.114322  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:07.114350  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:07.114366  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:07.155462  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:07.155502  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:07.213712  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:07.213746  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:07.242881  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:07.242904  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:07.299545  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:07.299578  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:09.838991  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:09.839443  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:09.839502  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:09.839556  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:09.876682  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:09.876707  204716 cri.go:89] found id: ""
	I1026 08:31:09.876717  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:09.876775  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:09.881816  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:09.881891  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:09.917109  204716 cri.go:89] found id: ""
	I1026 08:31:09.917135  204716 logs.go:282] 0 containers: []
	W1026 08:31:09.917147  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:09.917155  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:09.917218  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:09.951211  204716 cri.go:89] found id: ""
	I1026 08:31:09.951239  204716 logs.go:282] 0 containers: []
	W1026 08:31:09.951316  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:09.951329  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:09.951404  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:09.990196  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:09.990219  204716 cri.go:89] found id: ""
	I1026 08:31:09.990229  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:09.990321  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:09.995707  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:09.995769  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:10.033394  204716 cri.go:89] found id: ""
	I1026 08:31:10.033418  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.033427  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:10.033434  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:10.033490  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:10.070961  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:10.070999  204716 cri.go:89] found id: ""
	I1026 08:31:10.071008  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:10.071073  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:10.075866  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:10.075937  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:10.112025  204716 cri.go:89] found id: ""
	I1026 08:31:10.112052  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.112062  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:10.112069  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:10.112121  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:10.149215  204716 cri.go:89] found id: ""
	I1026 08:31:10.149241  204716 logs.go:282] 0 containers: []
	W1026 08:31:10.149274  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:10.149286  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:10.149306  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:10.183486  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:10.183521  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:10.258122  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:10.258163  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:10.299394  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:10.299426  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:10.445199  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:10.445230  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:10.466321  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:10.466349  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:10.548993  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:10.549018  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:10.549033  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:10.599059  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:10.599100  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	W1026 08:31:09.487682  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:11.986666  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	I1026 08:31:11.689627  258469 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:31:11.689657  258469 machine.go:96] duration metric: took 5.291813216s to provisionDockerMachine
	I1026 08:31:11.689671  258469 start.go:293] postStartSetup for "embed-certs-752315" (driver="docker")
	I1026 08:31:11.689684  258469 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:31:11.689741  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:31:11.689810  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.711114  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:11.814836  258469 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:31:11.818782  258469 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:31:11.818809  258469 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:31:11.818822  258469 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:31:11.818881  258469 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:31:11.818984  258469 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:31:11.819126  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:31:11.827451  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:11.846886  258469 start.go:296] duration metric: took 157.199732ms for postStartSetup
	I1026 08:31:11.846961  258469 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:31:11.847035  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.865929  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:11.969619  258469 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:31:11.974564  258469 fix.go:56] duration metric: took 5.967312408s for fixHost
	I1026 08:31:11.974600  258469 start.go:83] releasing machines lock for "embed-certs-752315", held for 5.967365908s
	I1026 08:31:11.974667  258469 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-752315
	I1026 08:31:11.993896  258469 ssh_runner.go:195] Run: cat /version.json
	I1026 08:31:11.993960  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:11.993962  258469 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:31:11.994009  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:12.013691  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:12.014560  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:12.165392  258469 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:12.172065  258469 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:31:12.208051  258469 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:31:12.213071  258469 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:31:12.213135  258469 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:31:12.221069  258469 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:31:12.221096  258469 start.go:495] detecting cgroup driver to use...
	I1026 08:31:12.221128  258469 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:31:12.221169  258469 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:31:12.235270  258469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:31:12.248189  258469 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:31:12.248237  258469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:31:12.262552  258469 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:31:12.275439  258469 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:31:12.360531  258469 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:31:12.445898  258469 docker.go:234] disabling docker service ...
	I1026 08:31:12.445949  258469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:31:12.460131  258469 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:31:12.472733  258469 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:31:12.558293  258469 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:31:12.640157  258469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:31:12.652839  258469 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:31:12.667183  258469 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:31:12.667231  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.676543  258469 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:31:12.676614  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.685642  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.694564  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.704130  258469 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:31:12.714059  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.725005  258469 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.733854  258469 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:12.742811  258469 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:31:12.750153  258469 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:31:12.758020  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:12.840024  258469 ssh_runner.go:195] Run: sudo systemctl restart crio
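(Steps 08:31:12.66 through 08:31:12.84 rewrite the CRI-O drop-in and restart the service. The same edits, consolidated into one sketch under the assumption of the stock /etc/crio/crio.conf.d/02-crio.conf layout used here:)

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # pin the pause image and switch CRI-O to the systemd cgroup manager
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # conmon must run in the pod cgroup when systemd manages cgroups
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # let pods bind low ports without NET_BIND_SERVICE
    sudo grep -q '^ *default_sysctls' "$CONF" || \
      sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    sudo systemctl daemon-reload && sudo systemctl restart crio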
	I1026 08:31:12.954793  258469 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:31:12.954860  258469 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:31:12.959240  258469 start.go:563] Will wait 60s for crictl version
	I1026 08:31:12.959344  258469 ssh_runner.go:195] Run: which crictl
	I1026 08:31:12.963040  258469 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:31:12.987047  258469 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:31:12.987146  258469 ssh_runner.go:195] Run: crio --version
	I1026 08:31:13.014910  258469 ssh_runner.go:195] Run: crio --version
	I1026 08:31:13.044788  258469 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:31:13.045993  258469 cli_runner.go:164] Run: docker network inspect embed-certs-752315 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:31:13.063539  258469 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1026 08:31:13.067988  258469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
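(The /etc/hosts update above is idempotent: it strips any existing host.minikube.internal line before appending a fresh one, and copies the result back rather than renaming, since /etc/hosts inside a container is a bind mount that a rename would break. The same pattern as a reusable sketch; ensure_hosts_entry is an illustrative name, not a minikube helper:)

    # Rewrite /etc/hosts without the old entry, append the new one, then
    # cp the temp file back in place (cp preserves the bind mount; mv would not).
    ensure_hosts_entry() {  # usage: ensure_hosts_entry <ip> <name>
      { grep -v $'\t'"$2"'$' /etc/hosts; printf '%s\t%s\n' "$1" "$2"; } > "/tmp/h.$$"
      sudo cp "/tmp/h.$$" /etc/hosts
    }
    ensure_hosts_entry 192.168.103.1 host.minikube.internal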
	I1026 08:31:13.079916  258469 kubeadm.go:883] updating cluster {Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:31:13.080100  258469 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:13.080169  258469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:13.113332  258469 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:13.113356  258469 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:31:13.113403  258469 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:13.139663  258469 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:13.139687  258469 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:31:13.139696  258469 kubeadm.go:934] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1026 08:31:13.139810  258469 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-752315 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:31:13.139884  258469 ssh_runner.go:195] Run: crio config
	I1026 08:31:13.186272  258469 cni.go:84] Creating CNI manager for ""
	I1026 08:31:13.186293  258469 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:13.186322  258469 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:31:13.186352  258469 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-752315 NodeName:embed-certs-752315 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:31:13.186535  258469 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-752315"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:31:13.186602  258469 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:31:13.194688  258469 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:31:13.194780  258469 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:31:13.203627  258469 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (369 bytes)
	I1026 08:31:13.217643  258469 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:31:13.230703  258469 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
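(Once the rendered config lands at /var/tmp/minikube/kubeadm.yaml.new it can be sanity-checked before use. A sketch, assuming kubeadm v1.26+ where the `kubeadm config validate` subcommand exists, and that kubeadm sits alongside kubelet in minikube's binaries directory:)

    # validate the generated kubeadm config against its declared API versions
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new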
	I1026 08:31:13.245733  258469 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:31:13.249569  258469 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:31:13.260816  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:13.348811  258469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:13.383330  258469 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315 for IP: 192.168.103.2
	I1026 08:31:13.383357  258469 certs.go:195] generating shared ca certs ...
	I1026 08:31:13.383378  258469 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:13.383542  258469 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:31:13.383622  258469 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:31:13.383638  258469 certs.go:257] generating profile certs ...
	I1026 08:31:13.383750  258469 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/client.key
	I1026 08:31:13.383842  258469 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.key.6ac45575
	I1026 08:31:13.383905  258469 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.key
	I1026 08:31:13.384074  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:31:13.384117  258469 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:31:13.384130  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:31:13.384162  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:31:13.384196  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:31:13.384227  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:31:13.384311  258469 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:13.385078  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:31:13.407144  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:31:13.429439  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:31:13.450406  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:31:13.474677  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1026 08:31:13.497785  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:31:13.516650  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:31:13.535742  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/embed-certs-752315/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:31:13.555894  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:31:13.575859  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:31:13.595202  258469 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:31:13.613712  258469 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:31:13.627219  258469 ssh_runner.go:195] Run: openssl version
	I1026 08:31:13.633313  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:31:13.644505  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.648652  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.648715  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:31:13.685500  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:31:13.694110  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:31:13.704138  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.708490  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.708547  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:31:13.756313  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:31:13.764989  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:31:13.774154  258469 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.777980  258469 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.778033  258469 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:13.814815  258469 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
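(The `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-directory CA lookup: each CA in /etc/ssl/certs must be reachable through a <subject-hash>.0 symlink, which is where 51391683, 3ec20f2e, and b5213941 come from. The pattern for a single cert, as a sketch:)

    pem=/usr/share/ca-certificates/minikubeCA.pem
    # OpenSSL resolves CAs by subject-name hash; the symlink must be named
    # "<hash>.0" (".1", ".2", ... only on hash collisions)
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"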
	I1026 08:31:13.823642  258469 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:31:13.827879  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:31:13.864559  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:31:13.904863  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:31:13.950925  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:31:14.000498  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:31:14.057883  258469 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
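(The six `-checkend 86400` probes above ask whether each control-plane certificate expires within the next 24 hours, 86400 seconds; openssl exits non-zero if so, which is what would trigger regeneration. The same check as a loop, assuming the cert paths from the log:)

    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
      # a non-zero exit here means the cert expires within 86400s (24h)
      sudo openssl x509 -noout -checkend 86400 \
        -in "/var/lib/minikube/certs/${crt}.crt" \
        || echo "expiring soon: ${crt}"
    done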
	I1026 08:31:14.099685  258469 kubeadm.go:400] StartCluster: {Name:embed-certs-752315 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-752315 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:14.099770  258469 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:31:14.099819  258469 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:31:14.134457  258469 cri.go:89] found id: "b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2"
	I1026 08:31:14.134483  258469 cri.go:89] found id: "0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2"
	I1026 08:31:14.134491  258469 cri.go:89] found id: "412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a"
	I1026 08:31:14.134497  258469 cri.go:89] found id: "53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66"
	I1026 08:31:14.134509  258469 cri.go:89] found id: ""
	I1026 08:31:14.134559  258469 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:31:14.146968  258469 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:14Z" level=error msg="open /run/runc: no such file or directory"
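(The unpause probe above fails because runc keeps its state under /run/runc by default, and the directory is absent here, meaning there are no paused containers to resume. A quick check of the same condition, as an illustrative sketch rather than a minikube command:)

    # runc's default state root is /run/runc; if it is missing there is
    # nothing paused to list or resume
    if sudo test -d /run/runc; then
      sudo runc list -f json
    else
      echo "no runc state dir; nothing is paused"
    fi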
	I1026 08:31:14.147066  258469 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:31:14.155620  258469 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:31:14.155642  258469 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:31:14.155687  258469 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:31:14.163947  258469 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:31:14.164861  258469 kubeconfig.go:47] verify endpoint returned: get endpoint: "embed-certs-752315" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:14.165611  258469 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9429/kubeconfig needs updating (will repair): [kubeconfig missing "embed-certs-752315" cluster setting kubeconfig missing "embed-certs-752315" context setting]
	I1026 08:31:14.166654  258469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:14.168474  258469 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:31:14.179855  258469 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.103.2
	I1026 08:31:14.179898  258469 kubeadm.go:601] duration metric: took 24.249521ms to restartPrimaryControlPlane
	I1026 08:31:14.179909  258469 kubeadm.go:402] duration metric: took 80.234805ms to StartCluster
	I1026 08:31:14.179926  258469 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:14.180000  258469 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:14.182227  258469 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:14.182511  258469 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:31:14.182731  258469 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:14.182780  258469 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:31:14.182870  258469 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-752315"
	I1026 08:31:14.182892  258469 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-752315"
	W1026 08:31:14.182899  258469 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:31:14.182925  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.183443  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.183537  258469 addons.go:69] Setting dashboard=true in profile "embed-certs-752315"
	I1026 08:31:14.183566  258469 addons.go:238] Setting addon dashboard=true in "embed-certs-752315"
	W1026 08:31:14.183575  258469 addons.go:247] addon dashboard should already be in state true
	I1026 08:31:14.183608  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.183634  258469 addons.go:69] Setting default-storageclass=true in profile "embed-certs-752315"
	I1026 08:31:14.183655  258469 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-752315"
	I1026 08:31:14.183932  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.184081  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.186657  258469 out.go:179] * Verifying Kubernetes components...
	I1026 08:31:14.188336  258469 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:14.210539  258469 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:31:14.211738  258469 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:14.211773  258469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:31:14.211827  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.212844  258469 addons.go:238] Setting addon default-storageclass=true in "embed-certs-752315"
	W1026 08:31:14.212862  258469 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:31:14.212888  258469 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:31:14.213357  258469 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:31:14.214707  258469 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 08:31:14.215844  258469 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 08:31:14.216878  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:31:14.216896  258469 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:31:14.216959  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.241222  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.242903  258469 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:14.242928  258469 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:31:14.243002  258469 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:31:14.248358  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.274441  258469 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:31:14.343526  258469 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:14.357603  258469 node_ready.go:35] waiting up to 6m0s for node "embed-certs-752315" to be "Ready" ...
	I1026 08:31:14.371975  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:31:14.372002  258469 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:31:14.372228  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:14.387399  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:31:14.387425  258469 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:31:14.396324  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:14.405054  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:31:14.405090  258469 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:31:14.422329  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:31:14.422351  258469 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:31:14.441228  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:31:14.441266  258469 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:31:14.459443  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:31:14.459471  258469 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:31:14.473287  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:31:14.473313  258469 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:31:14.488458  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:31:14.488482  258469 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:31:14.503998  258469 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:31:14.504023  258469 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:31:14.517915  258469 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:31:15.891031  258469 node_ready.go:49] node "embed-certs-752315" is "Ready"
	I1026 08:31:15.891068  258469 node_ready.go:38] duration metric: took 1.533436802s for node "embed-certs-752315" to be "Ready" ...
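(node_ready.go polls the node object until its Ready condition turns True, 1.53s here against a 6m budget. The equivalent standalone check is a one-liner; a sketch, assuming kubectl points at this cluster's kubeconfig:)

    # block until the node reports Ready, mirroring the 6m0s wait above
    kubectl wait --for=condition=Ready node/embed-certs-752315 --timeout=6m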
	I1026 08:31:15.891085  258469 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:31:15.891137  258469 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:31:16.440432  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.068171271s)
	I1026 08:31:16.440492  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.044134258s)
	I1026 08:31:16.440595  258469 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.922646888s)
	I1026 08:31:16.440643  258469 api_server.go:72] duration metric: took 2.258096796s to wait for apiserver process to appear ...
	I1026 08:31:16.440664  258469 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:31:16.440727  258469 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:31:16.442359  258469 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-752315 addons enable metrics-server
	
	I1026 08:31:16.447206  258469 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:31:16.447234  258469 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
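(The component-by-component breakdown above is the apiserver's verbose healthz output: a 500 with [-] markers for the post-start hooks that have not finished yet, here the RBAC bootstrap roles and system priority classes. The same view can be fetched directly; a sketch, assuming anonymous access to /healthz, which the default system:public-info-viewer RBAC binding allows:)

    # the ?verbose query makes /healthz list every registered check
    # with a [+]/[-] marker even when the overall status is 200
    curl -sk "https://192.168.103.2:8443/healthz?verbose"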
	I1026 08:31:16.453268  258469 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1026 08:31:13.180736  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:13.181229  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:13.181305  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:13.181377  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:13.210376  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:13.210402  204716 cri.go:89] found id: ""
	I1026 08:31:13.210412  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:13.210470  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.214344  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:13.214400  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:13.241763  204716 cri.go:89] found id: ""
	I1026 08:31:13.241785  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.241803  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:13.241809  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:13.241854  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:13.269564  204716 cri.go:89] found id: ""
	I1026 08:31:13.269589  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.269596  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:13.269603  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:13.269659  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:13.301411  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:13.301436  204716 cri.go:89] found id: ""
	I1026 08:31:13.301445  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:13.301499  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.305521  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:13.305589  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:13.334027  204716 cri.go:89] found id: ""
	I1026 08:31:13.334054  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.334063  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:13.334068  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:13.334165  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:13.362278  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:13.362298  204716 cri.go:89] found id: ""
	I1026 08:31:13.362306  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:13.362365  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:13.366413  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:13.366474  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:13.397568  204716 cri.go:89] found id: ""
	I1026 08:31:13.397605  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.397615  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:13.397622  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:13.397696  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:13.428742  204716 cri.go:89] found id: ""
	I1026 08:31:13.428769  204716 logs.go:282] 0 containers: []
	W1026 08:31:13.428780  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:13.428791  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:13.428806  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:13.500881  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:13.500900  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:13.500912  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:13.533972  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:13.534002  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:13.594834  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:13.594876  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:13.620981  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:13.621026  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:13.671810  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:13.671843  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:13.705028  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:13.705063  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:13.813306  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:13.813336  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:16.329962  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:16.330520  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:16.330583  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:16.330654  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:16.362891  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:16.362910  204716 cri.go:89] found id: ""
	I1026 08:31:16.362918  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:16.362964  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.367015  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:16.367090  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:16.397384  204716 cri.go:89] found id: ""
	I1026 08:31:16.397415  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.397427  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:16.397435  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:16.397490  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:16.427189  204716 cri.go:89] found id: ""
	I1026 08:31:16.427216  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.427233  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:16.427240  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:16.427324  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:16.457344  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:16.457359  204716 cri.go:89] found id: ""
	I1026 08:31:16.457372  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:16.457430  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.462847  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:16.462919  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:16.494130  204716 cri.go:89] found id: ""
	I1026 08:31:16.494157  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.494168  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:16.494175  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:16.494236  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:16.527074  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:16.527100  204716 cri.go:89] found id: ""
	I1026 08:31:16.527110  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:16.527169  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:16.532570  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:16.532630  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:16.565328  204716 cri.go:89] found id: ""
	I1026 08:31:16.565352  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.565360  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:16.565365  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:16.565426  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:16.592471  204716 cri.go:89] found id: ""
	I1026 08:31:16.592500  204716 logs.go:282] 0 containers: []
	W1026 08:31:16.592510  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:16.592519  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:16.592531  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:16.628096  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:16.628136  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:16.682413  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:16.682449  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:16.709799  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:16.709827  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:16.758903  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:16.758938  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1026 08:31:13.987643  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	W1026 08:31:15.988474  255419 pod_ready.go:104] pod "coredns-66bc5c9577-p5nmq" is not "Ready", error: <nil>
	I1026 08:31:16.454651  258469 addons.go:514] duration metric: took 2.271871118s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1026 08:31:16.941661  258469 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:31:16.947182  258469 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:31:16.947212  258469 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
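	A note on the 500 above: the aggregated /healthz response deliberately prints "reason withheld" for a failing check; the underlying error is only exposed on the per-check endpoint. A minimal sketch for pulling the detail while the apiserver is settling (assumes a kubeconfig with admin access to this profile):
	
		# query the failing post-start hook directly; the root /healthz hides the reason
		kubectl get --raw='/healthz/poststarthook/rbac/bootstrap-roles'
		# or re-check the aggregate with the per-check listing
		kubectl get --raw='/healthz?verbose'
	
	In this run the check clears on its own within a second (the 200 just below), the usual transient while the RBAC bootstrap hook finishes after an apiserver restart.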
	I1026 08:31:17.440839  258469 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:31:17.445348  258469 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 08:31:17.446341  258469 api_server.go:141] control plane version: v1.34.1
	I1026 08:31:17.446366  258469 api_server.go:131] duration metric: took 1.005649612s to wait for apiserver health ...
	I1026 08:31:17.446376  258469 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:31:17.449925  258469 system_pods.go:59] 8 kube-system pods found
	I1026 08:31:17.449961  258469 system_pods.go:61] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:17.449973  258469 system_pods.go:61] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:17.449987  258469 system_pods.go:61] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 08:31:17.450000  258469 system_pods.go:61] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:17.450009  258469 system_pods.go:61] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:17.450022  258469 system_pods.go:61] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 08:31:17.450036  258469 system_pods.go:61] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:17.450048  258469 system_pods.go:61] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:31:17.450057  258469 system_pods.go:74] duration metric: took 3.67247ms to wait for pod list to return data ...
	I1026 08:31:17.450076  258469 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:31:17.452127  258469 default_sa.go:45] found service account: "default"
	I1026 08:31:17.452147  258469 default_sa.go:55] duration metric: took 2.061957ms for default service account to be created ...
	I1026 08:31:17.452168  258469 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:31:17.454750  258469 system_pods.go:86] 8 kube-system pods found
	I1026 08:31:17.454783  258469 system_pods.go:89] "coredns-66bc5c9577-jktn8" [9a6b6a27-7914-4afa-9aee-3ef807310513] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:31:17.454795  258469 system_pods.go:89] "etcd-embed-certs-752315" [d7872377-f318-41fc-aee4-c7fb1fc11cf8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:17.454809  258469 system_pods.go:89] "kindnet-m4lzl" [2bad6af2-87f0-4874-957b-80da1acf3644] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 08:31:17.454831  258469 system_pods.go:89] "kube-apiserver-embed-certs-752315" [6e127291-4127-4650-b294-a2b0c23d5589] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:17.454839  258469 system_pods.go:89] "kube-controller-manager-embed-certs-752315" [4522e23b-e101-4ca6-9e2b-294764e7a1ec] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:17.454852  258469 system_pods.go:89] "kube-proxy-5bf98" [8d092c78-0205-4b69-84bd-bb2b1ec33f17] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 08:31:17.454864  258469 system_pods.go:89] "kube-scheduler-embed-certs-752315" [d6a8357c-f4a8-4402-818a-1035ad27ccf8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:17.454878  258469 system_pods.go:89] "storage-provisioner" [0c8393f3-2b62-4bc8-b3cf-a43059d8cdee] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:31:17.454886  258469 system_pods.go:126] duration metric: took 2.712195ms to wait for k8s-apps to be running ...
	I1026 08:31:17.454898  258469 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:31:17.454945  258469 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:17.467847  258469 system_svc.go:56] duration metric: took 12.940628ms WaitForService to wait for kubelet
	I1026 08:31:17.467877  258469 kubeadm.go:586] duration metric: took 3.285333037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:31:17.467899  258469 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:31:17.470681  258469 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:31:17.470705  258469 node_conditions.go:123] node cpu capacity is 8
	I1026 08:31:17.470722  258469 node_conditions.go:105] duration metric: took 2.817538ms to run NodePressure ...
	I1026 08:31:17.470736  258469 start.go:241] waiting for startup goroutines ...
	I1026 08:31:17.470750  258469 start.go:246] waiting for cluster config update ...
	I1026 08:31:17.470766  258469 start.go:255] writing updated cluster config ...
	I1026 08:31:17.471062  258469 ssh_runner.go:195] Run: rm -f paused
	I1026 08:31:17.474703  258469 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:31:17.478265  258469 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-jktn8" in "kube-system" namespace to be "Ready" or be gone ...
	W1026 08:31:19.484557  258469 pod_ready.go:104] pod "coredns-66bc5c9577-jktn8" is not "Ready", error: <nil>
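	The `error: <nil>` in the pod_ready warning means the poll saw a pod that is not yet Ready, not that the API call failed; minikube keeps polling until the pod is Ready, gone, or the 4m0s budget is spent. A quick way to watch the same condition by hand (sketch, assuming kubectl is pointed at this profile):
	
		kubectl -n kube-system get pod coredns-66bc5c9577-jktn8 \
		  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'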
	I1026 08:31:16.797643  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:16.797685  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:16.943175  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:16.943210  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:16.961586  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:16.961619  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:17.035898  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
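	From here the report interleaves a second test process (204716) whose apiserver is unreachable: `localhost:8443 ... connection refused` means nothing is listening on the node, so `kubectl describe nodes` cannot work and minikube falls back to collecting per-container logs via crictl, as shown next. A sketch for confirming the listener state from inside the node (assumes shell access via `minikube ssh`):
	
		sudo ss -ltnp | grep 8443                  # is anything bound to the apiserver port?
		curl -k https://localhost:8443/healthz     # expect 'connection refused' while it is down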
	I1026 08:31:19.537450  204716 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:19.537928  204716 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1026 08:31:19.537990  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1026 08:31:19.538055  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1026 08:31:19.567756  204716 cri.go:89] found id: "d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	I1026 08:31:19.567775  204716 cri.go:89] found id: ""
	I1026 08:31:19.567783  204716 logs.go:282] 1 containers: [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a]
	I1026 08:31:19.567836  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:19.572198  204716 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1026 08:31:19.572285  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1026 08:31:19.601694  204716 cri.go:89] found id: ""
	I1026 08:31:19.601724  204716 logs.go:282] 0 containers: []
	W1026 08:31:19.601735  204716 logs.go:284] No container was found matching "etcd"
	I1026 08:31:19.601742  204716 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1026 08:31:19.601801  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1026 08:31:19.631850  204716 cri.go:89] found id: ""
	I1026 08:31:19.631877  204716 logs.go:282] 0 containers: []
	W1026 08:31:19.631889  204716 logs.go:284] No container was found matching "coredns"
	I1026 08:31:19.631896  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1026 08:31:19.631953  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1026 08:31:19.658853  204716 cri.go:89] found id: "a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:19.658874  204716 cri.go:89] found id: ""
	I1026 08:31:19.658882  204716 logs.go:282] 1 containers: [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c]
	I1026 08:31:19.658940  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:19.663459  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1026 08:31:19.663511  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1026 08:31:19.691519  204716 cri.go:89] found id: ""
	I1026 08:31:19.691541  204716 logs.go:282] 0 containers: []
	W1026 08:31:19.691549  204716 logs.go:284] No container was found matching "kube-proxy"
	I1026 08:31:19.691554  204716 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1026 08:31:19.691610  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1026 08:31:19.719623  204716 cri.go:89] found id: "c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:19.719647  204716 cri.go:89] found id: ""
	I1026 08:31:19.719657  204716 logs.go:282] 1 containers: [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d]
	I1026 08:31:19.719712  204716 ssh_runner.go:195] Run: which crictl
	I1026 08:31:19.723686  204716 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1026 08:31:19.723755  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1026 08:31:19.758197  204716 cri.go:89] found id: ""
	I1026 08:31:19.758219  204716 logs.go:282] 0 containers: []
	W1026 08:31:19.758227  204716 logs.go:284] No container was found matching "kindnet"
	I1026 08:31:19.758233  204716 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1026 08:31:19.758315  204716 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1026 08:31:19.791790  204716 cri.go:89] found id: ""
	I1026 08:31:19.791818  204716 logs.go:282] 0 containers: []
	W1026 08:31:19.791830  204716 logs.go:284] No container was found matching "storage-provisioner"
	I1026 08:31:19.791846  204716 logs.go:123] Gathering logs for kube-scheduler [a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c] ...
	I1026 08:31:19.791863  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 a255d470f0a257750cc4034feed1feb4caab7079d006b5b5fc9adf9ff422962c"
	I1026 08:31:19.850492  204716 logs.go:123] Gathering logs for kube-controller-manager [c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d] ...
	I1026 08:31:19.850523  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 c52ba8a8207662d8481a792af0ad09ede4be645220627712e638b4312f0aa90d"
	I1026 08:31:19.882508  204716 logs.go:123] Gathering logs for CRI-O ...
	I1026 08:31:19.882546  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1026 08:31:19.953210  204716 logs.go:123] Gathering logs for container status ...
	I1026 08:31:19.953242  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1026 08:31:19.991403  204716 logs.go:123] Gathering logs for kubelet ...
	I1026 08:31:19.991428  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1026 08:31:20.096405  204716 logs.go:123] Gathering logs for dmesg ...
	I1026 08:31:20.096443  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1026 08:31:20.110367  204716 logs.go:123] Gathering logs for describe nodes ...
	I1026 08:31:20.110392  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1026 08:31:20.172486  204716 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1026 08:31:20.172510  204716 logs.go:123] Gathering logs for kube-apiserver [d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a] ...
	I1026 08:31:20.172524  204716 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d1bb81ae172f1f9c3f407371200f73e09836a6e120d392f7f6c266f4d1e2533a"
	
	
	==> CRI-O <==
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.121749Z" level=info msg="Created container 8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh/kubernetes-dashboard" id=ec9eae44-7604-49f9-b896-b088c4db63a3 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.122359363Z" level=info msg="Starting container: 8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5" id=6bfd9d0f-2db2-4ca3-8491-b5a9734ed83f name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:30:51 old-k8s-version-810379 crio[563]: time="2025-10-26T08:30:51.124001818Z" level=info msg="Started container" PID=1731 containerID=8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh/kubernetes-dashboard id=6bfd9d0f-2db2-4ca3-8491-b5a9734ed83f name=/runtime.v1.RuntimeService/StartContainer sandboxID=597d9b8123579b4a431a49d1015ca7b84edd6f2bfc1e15b15c7363c74bc7abf3
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.920104441Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=24c755b2-7aa9-4ee7-a9f7-dbbfe0e842a3 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.921033434Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=70bbe60f-2262-4000-b839-60d2e369bc7f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.92201144Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=570b4cf8-470e-492a-8877-cd7f30474091 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.922149302Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926781519Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926926039Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/89926590478c5943b0f042bf0cbe00f844fb32a97a19e13c9a41c8f466196a3e/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.926948998Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/89926590478c5943b0f042bf0cbe00f844fb32a97a19e13c9a41c8f466196a3e/merged/etc/group: no such file or directory"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.9273309Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.957623439Z" level=info msg="Created container a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c: kube-system/storage-provisioner/storage-provisioner" id=570b4cf8-470e-492a-8877-cd7f30474091 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.958304679Z" level=info msg="Starting container: a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c" id=70853703-a190-4231-b2e4-6458c48efbde name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:03 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:03.961592277Z" level=info msg="Started container" PID=1757 containerID=a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c description=kube-system/storage-provisioner/storage-provisioner id=70853703-a190-4231-b2e4-6458c48efbde name=/runtime.v1.RuntimeService/StartContainer sandboxID=cda4db2edaa1968e664d8aa120f28c7f4e23afae313da61c4ee6d4e049446ea9
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.804101497Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=98d33240-9c7f-4451-bc62-3f8440f25cfa name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.80506713Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=723ca4a6-f8c8-4305-958e-48d9023a2425 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.806063194Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=59e9ac13-7e98-4588-9f19-fa79cd98c773 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.806195865Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.812789216Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.813530655Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.848134072Z" level=info msg="Created container fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=59e9ac13-7e98-4588-9f19-fa79cd98c773 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.848823628Z" level=info msg="Starting container: fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d" id=ff88478e-a9fe-472a-8f81-aee38036277e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.851138539Z" level=info msg="Started container" PID=1788 containerID=fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper id=ff88478e-a9fe-472a-8f81-aee38036277e name=/runtime.v1.RuntimeService/StartContainer sandboxID=e832e7a50aeb9f2619b125376c79e3e6deadddb7ebbe7eab5247f5c98f5612ae
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.938739514Z" level=info msg="Removing container: 45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d" id=89d1bae9-3965-4478-bbc9-6e7a462d22e9 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:08 old-k8s-version-810379 crio[563]: time="2025-10-26T08:31:08.951317929Z" level=info msg="Removed container 45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl/dashboard-metrics-scraper" id=89d1bae9-3965-4478-bbc9-6e7a462d22e9 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	fc59cd40c6251       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago      Exited              dashboard-metrics-scraper   2                   e832e7a50aeb9       dashboard-metrics-scraper-5f989dc9cf-l92pl       kubernetes-dashboard
	a05f9bc7d8515       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           18 seconds ago      Running             storage-provisioner         1                   cda4db2edaa19       storage-provisioner                              kube-system
	8ba7298a29c40       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   31 seconds ago      Running             kubernetes-dashboard        0                   597d9b8123579       kubernetes-dashboard-8694d4445c-7kfvh            kubernetes-dashboard
	97a9356c65d4e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           49 seconds ago      Running             coredns                     0                   e070d90916789       coredns-5dd5756b68-wrpqk                         kube-system
	2ec7dc5b7e012       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           49 seconds ago      Running             busybox                     1                   3e162c6c4f2cf       busybox                                          default
	31e670af5aeb0       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           49 seconds ago      Running             kube-proxy                  0                   2ad46201f3a03       kube-proxy-455nz                                 kube-system
	f2c64b3865d37       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           49 seconds ago      Running             kindnet-cni                 0                   b1b942a26efe0       kindnet-6mfc2                                    kube-system
	ea4eca76c9673       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           49 seconds ago      Exited              storage-provisioner         0                   cda4db2edaa19       storage-provisioner                              kube-system
	05c780d0419bf       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           52 seconds ago      Running             kube-controller-manager     0                   1042814f0e6b6       kube-controller-manager-old-k8s-version-810379   kube-system
	91140716b117c       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           52 seconds ago      Running             kube-scheduler              0                   dbf9e2ba833da       kube-scheduler-old-k8s-version-810379            kube-system
	8d811096167c8       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           52 seconds ago      Running             etcd                        0                   e3a95fee53b96       etcd-old-k8s-version-810379                      kube-system
	b4b1d14a54456       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           52 seconds ago      Running             kube-apiserver              0                   f38a7d22e2c72       kube-apiserver-old-k8s-version-810379            kube-system
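	Reading the table: dashboard-metrics-scraper is Exited at ATTEMPT 2, which lines up with the CrashLoopBackOff errors in the kubelet section below, while the restarted storage-provisioner (attempt 1) is Running again. A sketch for pulling the failing container's output directly (assumes crictl resolves the truncated IDs shown in the table, as it does for unambiguous prefixes):
	
		sudo crictl logs --tail 50 fc59cd40c6251
		sudo crictl inspect fc59cd40c6251     # exit code and finished-at timestamp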
	
	
	==> coredns [97a9356c65d4e3ca11e26338357b00da6fc7933cca8a4c49086bb3cb7e53e47a] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:40779 - 27509 "HINFO IN 1732453957897710394.6348622279188320067. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032397624s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
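	CoreDNS sat in "waiting for Kubernetes API" until the apiserver finished restarting, then started with an unsynced cache; the HINFO query is its loop-detection probe, and the NXDOMAIN answer means the upstream resolver responded and no forwarding loop exists. A minimal in-cluster check, using the busybox pod visible in the container table (sketch, assuming kubectl targets this profile):
	
		kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local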
	
	
	==> describe nodes <==
	Name:               old-k8s-version-810379
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-810379
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=old-k8s-version-810379
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_29_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:29:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-810379
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:31:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:31:03 +0000   Sun, 26 Oct 2025 08:29:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-810379
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                d265c90b-90d2-4c31-9d3f-ae5ff5d718c0
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-5dd5756b68-wrpqk                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-old-k8s-version-810379                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         117s
	  kube-system                 kindnet-6mfc2                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-old-k8s-version-810379             250m (3%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-controller-manager-old-k8s-version-810379    200m (2%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 kube-proxy-455nz                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-old-k8s-version-810379             100m (1%)     0 (0%)      0 (0%)           0 (0%)         117s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-l92pl        0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-7kfvh             0 (0%)        0 (0%)      0 (0%)           0 (0%)         38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 49s                  kube-proxy       
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x8 over 2m3s)  kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    117s                 kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  117s                 kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     117s                 kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           106s                 node-controller  Node old-k8s-version-810379 event: Registered Node old-k8s-version-810379 in Controller
	  Normal  NodeReady                92s                  kubelet          Node old-k8s-version-810379 status is now: NodeReady
	  Normal  Starting                 54s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)    kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)    kubelet          Node old-k8s-version-810379 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)    kubelet          Node old-k8s-version-810379 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           38s                  node-controller  Node old-k8s-version-810379 event: Registered Node old-k8s-version-810379 in Controller
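	The Events list records three "Starting kubelet." cycles (2m3s, 117s and 54s ago) plus two RegisteredNode events, i.e. the node was restarted twice during the test, matching the attempt counters in the container table. To get the same view time-ordered across namespaces (sketch):
	
		kubectl get events -A --sort-by=.lastTimestamp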
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
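	The repeated "martian source 10.244.0.20 from 127.0.0.1" entries are the kernel flagging loopback-sourced packets arriving on a real interface. With kube-proxy setting route_localnet=1 (see its log below) this is an expected side effect of localhost NodePort support rather than a network fault. The relevant knobs can be inspected on the node (sketch):
	
		sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.eth0.log_martians
		sysctl net.ipv4.conf.all.route_localnet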
	
	
	==> etcd [8d811096167c839c4c04054b21e24c64ba17901168426c75d4408c4ce49c4503] <==
	{"level":"info","ts":"2025-10-26T08:30:30.377382Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","added-peer-id":"dfc97eb0aae75b33","added-peer-peer-urls":["https://192.168.94.2:2380"]}
	{"level":"info","ts":"2025-10-26T08:30:30.377622Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:30:30.377722Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-10-26T08:30:30.378829Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-10-26T08:30:30.378944Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:30:30.379044Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-10-26T08:30:30.379224Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-10-26T08:30:30.379326Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-10-26T08:30:31.668935Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.668992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.669012Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-10-26T08:30:31.669028Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.669058Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-10-26T08:30:31.670628Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-810379 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-10-26T08:30:31.670637Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:30:31.67066Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-10-26T08:30:31.670832Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-10-26T08:30:31.670864Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-10-26T08:30:31.67172Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-10-26T08:30:31.671822Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-10-26T08:30:52.055919Z","caller":"traceutil/trace.go:171","msg":"trace[1036694784] transaction","detail":"{read_only:false; response_revision:626; number_of_response:1; }","duration":"112.796718ms","start":"2025-10-26T08:30:51.943086Z","end":"2025-10-26T08:30:52.055883Z","steps":["trace[1036694784] 'process raft request'  (duration: 112.678044ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:30:52.055935Z","caller":"traceutil/trace.go:171","msg":"trace[650569965] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"114.046878ms","start":"2025-10-26T08:30:51.941869Z","end":"2025-10-26T08:30:52.055916Z","steps":["trace[650569965] 'process raft request'  (duration: 87.915391ms)","trace[650569965] 'compare'  (duration: 25.798373ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:30:52.055937Z","caller":"traceutil/trace.go:171","msg":"trace[1549737321] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"113.876947ms","start":"2025-10-26T08:30:51.942019Z","end":"2025-10-26T08:30:52.055896Z","steps":["trace[1549737321] 'process raft request'  (duration: 113.692049ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:31:23 up  1:13,  0 user,  load average: 3.60, 3.15, 2.01
	Linux old-k8s-version-810379 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [f2c64b3865d37d91db310f0c9a0dbe53668aa164448d5e9153a8a479b8323cad] <==
	I1026 08:30:33.364192       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:30:33.453643       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:30:33.453798       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:30:33.453820       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:30:33.453841       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:30:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:30:33.656888       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:30:33.656926       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:30:33.656940       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:30:33.657081       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:30:34.057036       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:30:34.057062       1 metrics.go:72] Registering metrics
	I1026 08:30:34.057128       1 controller.go:711] "Syncing nftables rules"
	I1026 08:30:43.658952       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:30:43.659029       1 main.go:301] handling current node
	I1026 08:30:53.657886       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:30:53.657939       1 main.go:301] handling current node
	I1026 08:31:03.657153       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:31:03.657189       1 main.go:301] handling current node
	I1026 08:31:13.659503       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:31:13.659544       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b4b1d14a54456f07311716e84e6ac70140f03e1a062261a56e0d6dd936819cec] <==
	I1026 08:30:32.717534       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:30:32.749903       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1026 08:30:32.772708       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1026 08:30:32.772805       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1026 08:30:32.772912       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:30:32.772942       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1026 08:30:32.772961       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1026 08:30:32.773136       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1026 08:30:32.773198       1 aggregator.go:166] initial CRD sync complete...
	I1026 08:30:32.773215       1 autoregister_controller.go:141] Starting autoregister controller
	I1026 08:30:32.773222       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:30:32.773229       1 cache.go:39] Caches are synced for autoregister controller
	E1026 08:30:32.778164       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:30:32.782773       1 shared_informer.go:318] Caches are synced for configmaps
	I1026 08:30:33.675557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:30:33.751142       1 controller.go:624] quota admission added evaluator for: namespaces
	I1026 08:30:33.792310       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1026 08:30:33.812870       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:30:33.824274       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:30:33.835630       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1026 08:30:33.904872       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.133.233"}
	I1026 08:30:33.923014       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.36.6"}
	I1026 08:30:45.613553       1 controller.go:624] quota admission added evaluator for: endpoints
	I1026 08:30:45.622985       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:30:45.701815       1 controller.go:624] quota admission added evaluator for: replicasets.apps
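	The burst of "quota admission added evaluator" lines is the restarted apiserver lazily re-registering quota evaluators as each resource type is first touched, and the single "Error removing old endpoints" is the usual, generally benign message when an apiserver starts against pre-existing state. The kubernetes Service endpoints it repairs can be inspected directly (sketch):
	
		kubectl get endpoints kubernetes -n default -o wide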
	
	
	==> kube-controller-manager [05c780d0419bff37382e6fa31430690a2e55479d8bdba3e10b0e53207ce9c8ea] <==
	I1026 08:30:45.718982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="12.159676ms"
	I1026 08:30:45.721089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="15.779243ms"
	I1026 08:30:45.726971       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="7.932362ms"
	I1026 08:30:45.727069       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="54.003µs"
	I1026 08:30:45.728402       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.265837ms"
	I1026 08:30:45.728477       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="38.925µs"
	I1026 08:30:45.733009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.727µs"
	I1026 08:30:45.740693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="59.088µs"
	I1026 08:30:45.744717       1 shared_informer.go:318] Caches are synced for cronjob
	I1026 08:30:45.765121       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:30:45.794925       1 shared_informer.go:318] Caches are synced for stateful set
	I1026 08:30:45.801525       1 shared_informer.go:318] Caches are synced for disruption
	I1026 08:30:45.803892       1 shared_informer.go:318] Caches are synced for resource quota
	I1026 08:30:46.174931       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:30:46.174961       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1026 08:30:46.184121       1 shared_informer.go:318] Caches are synced for garbage collector
	I1026 08:30:48.889509       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="114.252µs"
	I1026 08:30:49.893207       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.883µs"
	I1026 08:30:50.950588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="74.299µs"
	I1026 08:30:52.057593       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="117.929097ms"
	I1026 08:30:52.057823       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="86.394µs"
	I1026 08:31:03.785310       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.052548ms"
	I1026 08:31:03.785406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="56.385µs"
	I1026 08:31:08.950292       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.302µs"
	I1026 08:31:16.034173       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="99.076µs"
	
	
	==> kube-proxy [31e670af5aeb033581d00601263cb434e88c2e86d089070c53108a36f7201098] <==
	I1026 08:30:33.249980       1 server_others.go:69] "Using iptables proxy"
	I1026 08:30:33.264714       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1026 08:30:33.292295       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:30:33.294901       1 server_others.go:152] "Using iptables Proxier"
	I1026 08:30:33.294940       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1026 08:30:33.294949       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1026 08:30:33.294989       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1026 08:30:33.295280       1 server.go:846] "Version info" version="v1.28.0"
	I1026 08:30:33.295346       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:33.296613       1 config.go:315] "Starting node config controller"
	I1026 08:30:33.296707       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1026 08:30:33.296881       1 config.go:97] "Starting endpoint slice config controller"
	I1026 08:30:33.296914       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1026 08:30:33.297078       1 config.go:188] "Starting service config controller"
	I1026 08:30:33.297228       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1026 08:30:33.396949       1 shared_informer.go:318] Caches are synced for node config
	I1026 08:30:33.397480       1 shared_informer.go:318] Caches are synced for service config
	I1026 08:30:33.397557       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [91140716b117cb4eb2f3c6e149ff401f7197babd90f5e046ace64b14ed25aded] <==
	I1026 08:30:31.057789       1 serving.go:348] Generated self-signed cert in-memory
	I1026 08:30:32.744422       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1026 08:30:32.746295       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:30:32.752748       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1026 08:30:32.752854       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1026 08:30:32.752875       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1026 08:30:32.752894       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1026 08:30:32.753344       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:30:32.753405       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1026 08:30:32.754034       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:30:32.754054       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 08:30:32.854455       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1026 08:30:32.854624       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1026 08:30:32.856106       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	
	
	==> kubelet <==
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791818     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6b85d1f8-06ed-4998-bad2-19ba60a53a1f-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-7kfvh\" (UID: \"6b85d1f8-06ed-4998-bad2-19ba60a53a1f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791868     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5jgk\" (UniqueName: \"kubernetes.io/projected/6b85d1f8-06ed-4998-bad2-19ba60a53a1f-kube-api-access-d5jgk\") pod \"kubernetes-dashboard-8694d4445c-7kfvh\" (UID: \"6b85d1f8-06ed-4998-bad2-19ba60a53a1f\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791897     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1b7d4875-4cc0-430e-b814-d8c405201f19-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-l92pl\" (UID: \"1b7d4875-4cc0-430e-b814-d8c405201f19\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl"
	Oct 26 08:30:45 old-k8s-version-810379 kubelet[720]: I1026 08:30:45.791918     720 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5dq7\" (UniqueName: \"kubernetes.io/projected/1b7d4875-4cc0-430e-b814-d8c405201f19-kube-api-access-j5dq7\") pod \"dashboard-metrics-scraper-5f989dc9cf-l92pl\" (UID: \"1b7d4875-4cc0-430e-b814-d8c405201f19\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl"
	Oct 26 08:30:48 old-k8s-version-810379 kubelet[720]: I1026 08:30:48.875324     720 scope.go:117] "RemoveContainer" containerID="3dce51d3ce60cb6e9dd7a6a7e9ba3721431364c56d8edfe1cb5b2be32c73a1ed"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: I1026 08:30:49.879916     720 scope.go:117] "RemoveContainer" containerID="3dce51d3ce60cb6e9dd7a6a7e9ba3721431364c56d8edfe1cb5b2be32c73a1ed"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: I1026 08:30:49.880099     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:49 old-k8s-version-810379 kubelet[720]: E1026 08:30:49.880467     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:30:50 old-k8s-version-810379 kubelet[720]: I1026 08:30:50.883644     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:50 old-k8s-version-810379 kubelet[720]: E1026 08:30:50.884069     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:30:51 old-k8s-version-810379 kubelet[720]: I1026 08:30:51.939752     720 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-7kfvh" podStartSLOduration=1.8956233949999999 podCreationTimestamp="2025-10-26 08:30:45 +0000 UTC" firstStartedPulling="2025-10-26 08:30:46.044589724 +0000 UTC m=+16.334859118" lastFinishedPulling="2025-10-26 08:30:51.088654964 +0000 UTC m=+21.378924372" observedRunningTime="2025-10-26 08:30:51.939273554 +0000 UTC m=+22.229542965" watchObservedRunningTime="2025-10-26 08:30:51.939688649 +0000 UTC m=+22.229958060"
	Oct 26 08:30:56 old-k8s-version-810379 kubelet[720]: I1026 08:30:56.024220     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:30:56 old-k8s-version-810379 kubelet[720]: E1026 08:30:56.024697     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:03 old-k8s-version-810379 kubelet[720]: I1026 08:31:03.919613     720 scope.go:117] "RemoveContainer" containerID="ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.803177     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.937190     720 scope.go:117] "RemoveContainer" containerID="45fdd8aa398729dd194a9e8b2da6fd01fb1b943351ad828e873abe6cf6e7164d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: I1026 08:31:08.937450     720 scope.go:117] "RemoveContainer" containerID="fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	Oct 26 08:31:08 old-k8s-version-810379 kubelet[720]: E1026 08:31:08.937816     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:16 old-k8s-version-810379 kubelet[720]: I1026 08:31:16.023857     720 scope.go:117] "RemoveContainer" containerID="fc59cd40c6251ba059595d6a8ed25d6d41cfc6efb405c0a0bb7d796d2b7cb35d"
	Oct 26 08:31:16 old-k8s-version-810379 kubelet[720]: E1026 08:31:16.024285     720 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-l92pl_kubernetes-dashboard(1b7d4875-4cc0-430e-b814-d8c405201f19)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-l92pl" podUID="1b7d4875-4cc0-430e-b814-d8c405201f19"
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:31:17 old-k8s-version-810379 kubelet[720]: I1026 08:31:17.619906     720 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:31:17 old-k8s-version-810379 systemd[1]: kubelet.service: Consumed 1.463s CPU time.
	
	
	==> kubernetes-dashboard [8ba7298a29c40dfc8c6704be6dd32b968b23596f2b90249aad7a644173902fb5] <==
	2025/10/26 08:30:51 Using namespace: kubernetes-dashboard
	2025/10/26 08:30:51 Using in-cluster config to connect to apiserver
	2025/10/26 08:30:51 Using secret token for csrf signing
	2025/10/26 08:30:51 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:30:51 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:30:51 Successful initial request to the apiserver, version: v1.28.0
	2025/10/26 08:30:51 Generating JWE encryption key
	2025/10/26 08:30:51 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:30:51 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:30:51 Initializing JWE encryption key from synchronized object
	2025/10/26 08:30:51 Creating in-cluster Sidecar client
	2025/10/26 08:30:51 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:30:51 Serving insecurely on HTTP port: 9090
	2025/10/26 08:30:51 Starting overwatch
	2025/10/26 08:31:21 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [a05f9bc7d851530f7dd8e58a8eb524b93587e00a90aab02a6c09492b0fb9b25c] <==
	I1026 08:31:03.973483       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:03.980987       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:03.981038       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1026 08:31:21.378332       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:31:21.378430       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"0ea5de82-4240-490f-8eb1-9a5d824d3381", APIVersion:"v1", ResourceVersion:"657", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-810379_fabee20e-5738-4427-a378-64856a10ad5e became leader
	I1026 08:31:21.378492       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-810379_fabee20e-5738-4427-a378-64856a10ad5e!
	I1026 08:31:21.479440       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-810379_fabee20e-5738-4427-a378-64856a10ad5e!
	
	
	==> storage-provisioner [ea4eca76c9673325cd454564d401e8f313d8b039a3881a24a985be812f2998d5] <==
	I1026 08:30:33.202085       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:03.204723       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
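The tail of these logs also explains the second storage-provisioner entry above it: the first provisioner container (ea4eca76...) started at 08:30:33, spent 30 seconds failing to reach the in-cluster apiserver VIP, and died at 08:31:03 with the i/o timeout shown; its replacement (a05f9bc7...) then won the kube-system/k8s.io-minikube-hostpath lease at 08:31:21. A minimal Go sketch of that /version probe, assuming a plain HTTPS GET with the 32-second timeout taken from the logged request (the real provisioner goes through client-go with the in-cluster CA rather than skipping verification):

	// A sketch only: reproduces the probe that failed above, not the
	// provisioner's actual code path.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // mirrors ?timeout=32s in the logged request
			Transport: &http.Transport{
				// Assumption to keep the sketch self-contained; the real
				// client trusts the in-cluster CA instead.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			// The first provisioner died here:
			// "dial tcp 10.96.0.1:443: i/o timeout".
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("apiserver version payload: %s\n", body)
	}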
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-810379 -n old-k8s-version-810379
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-810379 -n old-k8s-version-810379: exit status 2 (406.048392ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-810379 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.88s)
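The "(may be ok)" annotation above is the harness hedging for exactly this situation: the failed pause had already disabled the kubelet, so status exits non-zero (here 2) even though the {{.APIServer}} template field still renders Running. The --format argument is a Go template evaluated against minikube's status value; a minimal sketch with an illustrative struct (only the APIServer field and its Running value come from the output above, the rest is assumed):

	// A sketch of how a --format Go template picks one field out of a
	// status value; the Status type here is illustrative, not minikube's.
	package main

	import (
		"os"
		"text/template"
	)

	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"}
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		// Prints "Running" even though the kubelet is stopped, the same
		// shape as the exit-status-2-but-Running result above.
		if err := tmpl.Execute(os.Stdout, st); err != nil {
			panic(err)
		}
	}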

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (6.89s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-001983 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-001983 --alsologtostderr -v=1: exit status 80 (2.534643724s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-001983 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:31:49.745865  268915 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:49.746159  268915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:49.746165  268915 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:49.746169  268915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:49.746454  268915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:49.746758  268915 out.go:368] Setting JSON to false
	I1026 08:31:49.746800  268915 mustload.go:65] Loading cluster: no-preload-001983
	I1026 08:31:49.747241  268915 config.go:182] Loaded profile config "no-preload-001983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:49.747755  268915 cli_runner.go:164] Run: docker container inspect no-preload-001983 --format={{.State.Status}}
	I1026 08:31:49.772179  268915 host.go:66] Checking if "no-preload-001983" exists ...
	I1026 08:31:49.772482  268915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:49.844608  268915 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:31:49.832403368 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:49.845810  268915 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-001983 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:31:49.847705  268915 out.go:179] * Pausing node no-preload-001983 ... 
	I1026 08:31:49.849505  268915 host.go:66] Checking if "no-preload-001983" exists ...
	I1026 08:31:49.849860  268915 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:49.849916  268915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-001983
	I1026 08:31:49.871706  268915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33073 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/no-preload-001983/id_rsa Username:docker}
	I1026 08:31:49.974598  268915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:49.990719  268915 pause.go:52] kubelet running: true
	I1026 08:31:49.990803  268915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:50.191071  268915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:50.191152  268915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:50.275402  268915 cri.go:89] found id: "09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075"
	I1026 08:31:50.275429  268915 cri.go:89] found id: "d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae"
	I1026 08:31:50.275436  268915 cri.go:89] found id: "ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	I1026 08:31:50.275441  268915 cri.go:89] found id: "529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee"
	I1026 08:31:50.275446  268915 cri.go:89] found id: "2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e"
	I1026 08:31:50.275452  268915 cri.go:89] found id: "4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1"
	I1026 08:31:50.275456  268915 cri.go:89] found id: "895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456"
	I1026 08:31:50.275461  268915 cri.go:89] found id: "9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e"
	I1026 08:31:50.275465  268915 cri.go:89] found id: "09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb"
	I1026 08:31:50.275473  268915 cri.go:89] found id: "a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	I1026 08:31:50.275477  268915 cri.go:89] found id: "afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75"
	I1026 08:31:50.275489  268915 cri.go:89] found id: ""
	I1026 08:31:50.275541  268915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:50.289152  268915 retry.go:31] will retry after 267.752866ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:50Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:50.557498  268915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:50.571235  268915 pause.go:52] kubelet running: false
	I1026 08:31:50.571304  268915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:50.725401  268915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:50.725529  268915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:50.802180  268915 cri.go:89] found id: "09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075"
	I1026 08:31:50.802200  268915 cri.go:89] found id: "d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae"
	I1026 08:31:50.802203  268915 cri.go:89] found id: "ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	I1026 08:31:50.802206  268915 cri.go:89] found id: "529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee"
	I1026 08:31:50.802208  268915 cri.go:89] found id: "2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e"
	I1026 08:31:50.802211  268915 cri.go:89] found id: "4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1"
	I1026 08:31:50.802214  268915 cri.go:89] found id: "895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456"
	I1026 08:31:50.802223  268915 cri.go:89] found id: "9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e"
	I1026 08:31:50.802225  268915 cri.go:89] found id: "09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb"
	I1026 08:31:50.802237  268915 cri.go:89] found id: "a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	I1026 08:31:50.802241  268915 cri.go:89] found id: "afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75"
	I1026 08:31:50.802243  268915 cri.go:89] found id: ""
	I1026 08:31:50.802297  268915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:50.814552  268915 retry.go:31] will retry after 519.827445ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:50Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:51.335290  268915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:51.349699  268915 pause.go:52] kubelet running: false
	I1026 08:31:51.349754  268915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:51.500395  268915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:51.500489  268915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:51.570815  268915 cri.go:89] found id: "09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075"
	I1026 08:31:51.570842  268915 cri.go:89] found id: "d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae"
	I1026 08:31:51.570855  268915 cri.go:89] found id: "ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	I1026 08:31:51.570860  268915 cri.go:89] found id: "529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee"
	I1026 08:31:51.570864  268915 cri.go:89] found id: "2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e"
	I1026 08:31:51.570868  268915 cri.go:89] found id: "4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1"
	I1026 08:31:51.570872  268915 cri.go:89] found id: "895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456"
	I1026 08:31:51.570875  268915 cri.go:89] found id: "9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e"
	I1026 08:31:51.570878  268915 cri.go:89] found id: "09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb"
	I1026 08:31:51.570891  268915 cri.go:89] found id: "a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	I1026 08:31:51.570899  268915 cri.go:89] found id: "afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75"
	I1026 08:31:51.570903  268915 cri.go:89] found id: ""
	I1026 08:31:51.570946  268915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:51.583196  268915 retry.go:31] will retry after 342.786562ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:51Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:51.926844  268915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:31:51.941434  268915 pause.go:52] kubelet running: false
	I1026 08:31:51.941499  268915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:31:52.103495  268915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:31:52.103574  268915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:31:52.177572  268915 cri.go:89] found id: "09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075"
	I1026 08:31:52.177598  268915 cri.go:89] found id: "d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae"
	I1026 08:31:52.177604  268915 cri.go:89] found id: "ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	I1026 08:31:52.177608  268915 cri.go:89] found id: "529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee"
	I1026 08:31:52.177612  268915 cri.go:89] found id: "2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e"
	I1026 08:31:52.177616  268915 cri.go:89] found id: "4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1"
	I1026 08:31:52.177621  268915 cri.go:89] found id: "895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456"
	I1026 08:31:52.177625  268915 cri.go:89] found id: "9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e"
	I1026 08:31:52.177628  268915 cri.go:89] found id: "09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb"
	I1026 08:31:52.177643  268915 cri.go:89] found id: "a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	I1026 08:31:52.177648  268915 cri.go:89] found id: "afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75"
	I1026 08:31:52.177652  268915 cri.go:89] found id: ""
	I1026 08:31:52.177699  268915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:31:52.194143  268915 out.go:203] 
	W1026 08:31:52.195368  268915 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:52Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:52Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:31:52.195392  268915 out.go:285] * 
	* 
	W1026 08:31:52.200185  268915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:31:52.201493  268915 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-001983 --alsologtostderr -v=1 failed: exit status 80
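The stderr above shows the failure mechanics: crictl still lists eleven running containers, but every `sudo runc list -f json` attempt fails with `open /run/runc: no such file or directory`, so the pause path retries three times (retry.go:31) with sub-second backoffs and then aborts with GUEST_PAUSE. One plausible reading is that the runtime state on this crio node does not live under the default /run/runc root that a bare `runc list` consults, in which case the listing can never succeed and the retries are moot. A minimal Go sketch of the observed retry-around-exec pattern; the helper names are hypothetical, and only the command and backoff behaviour mirror the log:

	// A sketch (hypothetical names, not minikube's implementation) of the
	// retry loop visible above: run "sudo runc list -f json", and on
	// failure wait a randomized backoff and try again, giving up after a
	// few attempts as the real pause path did.
	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	func listRunning() error {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err != nil {
			// On the failing node this surfaces:
			//   open /run/runc: no such file or directory
			return fmt.Errorf("runc list: %v: %s", err, out)
		}
		fmt.Printf("running containers: %s\n", out)
		return nil
	}

	func main() {
		const attempts = 4 // the log shows one initial try plus three retries
		for i := 0; i < attempts; i++ {
			if err := listRunning(); err == nil {
				return
			} else if i < attempts-1 {
				d := time.Duration(200+rand.Intn(400)) * time.Millisecond
				fmt.Printf("will retry after %v: %v\n", d, err)
				time.Sleep(d)
			} else {
				fmt.Println("giving up:", err) // the CLI reports this as GUEST_PAUSE
			}
		}
	}

If that reading is right, the fix belongs in how the list step locates the runtime root, not in the backoff schedule.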
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-001983
helpers_test.go:243: (dbg) docker inspect no-preload-001983:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	        "Created": "2025-10-26T08:29:35.306793049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:30:52.742066608Z",
	            "FinishedAt": "2025-10-26T08:30:51.415359556Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hosts",
	        "LogPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6-json.log",
	        "Name": "/no-preload-001983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-001983:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-001983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	                "LowerDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-001983",
	                "Source": "/var/lib/docker/volumes/no-preload-001983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-001983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-001983",
	                "name.minikube.sigs.k8s.io": "no-preload-001983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d47042a6342050fc62f7bf7b362650e5e9c06e1961e22ef8c0aa82c25f4ae2a",
	            "SandboxKey": "/var/run/docker/netns/2d47042a6342",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-001983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:53:86:be:53:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0bdb8ca3ba1ed8384cb0d6339c847a03d4b5a80b703fdd60e4df4eb3b0fbcff7",
	                    "EndpointID": "ed4446bd4dfc868a4827b00f11da8dedfb97fd1ba07c8fba824a3e14183c8419",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-001983",
	                        "1c02a7265549"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
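Note what the inspect output rules out: the node container reports "Status": "running" and "Paused": false, so the pause failure happened inside the guest, not at the Docker layer, and the port map (22 -> 33073 and so on) matches the SSH endpoint the pause command used. A minimal Go sketch, assuming only that the docker CLI is on PATH, of reading the State block the way the post-mortem's --format calls do (field names match the JSON above; everything else is illustrative):

	// A sketch that shells out to docker inspect and decodes the State
	// block from the JSON array it prints.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type containerState struct {
		Status  string `json:"Status"`
		Running bool   `json:"Running"`
		Paused  bool   `json:"Paused"`
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-001983").Output()
		if err != nil {
			panic(err)
		}
		var infos []struct {
			State containerState `json:"State"`
		}
		if err := json.Unmarshal(out, &infos); err != nil {
			panic(err)
		}
		for _, info := range infos {
			// For the run above this prints: running true false. The
			// container was never actually paused at the Docker layer.
			fmt.Println(info.State.Status, info.State.Running, info.State.Paused)
		}
	}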
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983: exit status 2 (358.961272ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-001983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-001983 logs -n 25: (1.298851568s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p NoKubernetes-815548                                                                                                                                                                                                                        │ NoKubernetes-815548          │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:29 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:31:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:31:45.350330  267704 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:45.350653  267704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:45.350665  267704 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:45.350671  267704 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:45.350930  267704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:45.351464  267704 out.go:368] Setting JSON to false
	I1026 08:31:45.352925  267704 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4456,"bootTime":1761463049,"procs":365,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:31:45.353050  267704 start.go:141] virtualization: kvm guest
	I1026 08:31:45.356427  267704 out.go:179] * [kubernetes-upgrade-462840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:31:45.358194  267704 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:31:45.358187  267704 notify.go:220] Checking for updates...
	I1026 08:31:45.360807  267704 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:31:45.362275  267704 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:45.363875  267704 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:31:45.365712  267704 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:31:45.367282  267704 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:31:45.369298  267704 config.go:182] Loaded profile config "kubernetes-upgrade-462840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:45.369767  267704 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:31:45.403208  267704 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:31:45.403325  267704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:45.473282  267704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 08:31:45.461985081 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:45.473422  267704 docker.go:318] overlay module found
	I1026 08:31:45.476279  267704 out.go:179] * Using the docker driver based on existing profile
	I1026 08:31:45.477635  267704 start.go:305] selected driver: docker
	I1026 08:31:45.477654  267704 start.go:925] validating driver "docker" against &{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:45.477750  267704 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:31:45.478595  267704 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:45.547182  267704 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-10-26 08:31:45.53388821 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
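	The two docker info dumps above come from minikube shelling out to docker system info --format "{{json .}}" (the cli_runner lines) and decoding the JSON into its own struct (info.go:266). As a minimal standalone sketch of the same probe, using a generic map rather than minikube's typed struct (the field names ServerVersion and CgroupDriver are docker's own, as seen in the dump):
	
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	func main() {
		// The same probe the log shows cli_runner executing.
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info map[string]any // generic stand-in for minikube's typed struct
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		// Two of the fields driver validation cares about.
		fmt.Println("ServerVersion:", info["ServerVersion"])
		fmt.Println("CgroupDriver:", info["CgroupDriver"])
	}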
	I1026 08:31:45.547582  267704 cni.go:84] Creating CNI manager for ""
	I1026 08:31:45.547657  267704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:45.547703  267704 start.go:349] cluster config:
	{Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:45.549941  267704 out.go:179] * Starting "kubernetes-upgrade-462840" primary control-plane node in "kubernetes-upgrade-462840" cluster
	I1026 08:31:45.551235  267704 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:31:45.552782  267704 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:31:45.554122  267704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:45.554155  267704 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:31:45.554163  267704 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:31:45.554184  267704 cache.go:58] Caching tarball of preloaded images
	I1026 08:31:45.554301  267704 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:31:45.554316  267704 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:31:45.554433  267704 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/config.json ...
	I1026 08:31:45.580707  267704 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:31:45.580733  267704 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:31:45.580752  267704 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:31:45.580779  267704 start.go:360] acquireMachinesLock for kubernetes-upgrade-462840: {Name:mkd80f24e37729d329fe777d33e3092e56a7a873 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:31:45.580844  267704 start.go:364] duration metric: took 45.208µs to acquireMachinesLock for "kubernetes-upgrade-462840"
	I1026 08:31:45.580869  267704 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:31:45.580879  267704 fix.go:54] fixHost starting: 
	I1026 08:31:45.581147  267704 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:31:45.603291  267704 fix.go:112] recreateIfNeeded on kubernetes-upgrade-462840: state=Running err=<nil>
	W1026 08:31:45.603327  267704 fix.go:138] unexpected machine state, will restart: <nil>
	W1026 08:31:42.483111  258469 pod_ready.go:104] pod "coredns-66bc5c9577-jktn8" is not "Ready", error: <nil>
	W1026 08:31:44.484672  258469 pod_ready.go:104] pod "coredns-66bc5c9577-jktn8" is not "Ready", error: <nil>
	I1026 08:31:45.605540  267704 out.go:252] * Updating the running docker "kubernetes-upgrade-462840" container ...
	I1026 08:31:45.605575  267704 machine.go:93] provisionDockerMachine start ...
	I1026 08:31:45.605660  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:45.631407  267704 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:45.631746  267704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1026 08:31:45.631759  267704 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:31:45.778563  267704 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-462840
	
	I1026 08:31:45.778592  267704 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-462840"
	I1026 08:31:45.778653  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:45.798014  267704 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:45.798310  267704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1026 08:31:45.798331  267704 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-462840 && echo "kubernetes-upgrade-462840" | sudo tee /etc/hostname
	I1026 08:31:45.954048  267704 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-462840
	
	I1026 08:31:45.954137  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:45.981067  267704 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:45.982042  267704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1026 08:31:45.982096  267704 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-462840' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-462840/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-462840' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:31:46.131857  267704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:31:46.131886  267704 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:31:46.131903  267704 ubuntu.go:190] setting up certificates
	I1026 08:31:46.131912  267704 provision.go:84] configureAuth start
	I1026 08:31:46.131978  267704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-462840
	I1026 08:31:46.153345  267704 provision.go:143] copyHostCerts
	I1026 08:31:46.153436  267704 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:31:46.153458  267704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:31:46.153538  267704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:31:46.153649  267704 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:31:46.153660  267704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:31:46.153697  267704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:31:46.153769  267704 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:31:46.153774  267704 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:31:46.153804  267704 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:31:46.153872  267704 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-462840 san=[127.0.0.1 192.168.85.2 kubernetes-upgrade-462840 localhost minikube]
	I1026 08:31:46.332812  267704 provision.go:177] copyRemoteCerts
	I1026 08:31:46.332872  267704 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:31:46.332940  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:46.350579  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:46.453708  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:31:46.472144  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I1026 08:31:46.490898  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:31:46.509213  267704 provision.go:87] duration metric: took 377.290614ms to configureAuth
	I1026 08:31:46.509240  267704 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:31:46.509441  267704 config.go:182] Loaded profile config "kubernetes-upgrade-462840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:46.509555  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:46.530450  267704 main.go:141] libmachine: Using SSH client type: native
	I1026 08:31:46.530682  267704 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33008 <nil> <nil>}
	I1026 08:31:46.530708  267704 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:31:47.030115  267704 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:31:47.030146  267704 machine.go:96] duration metric: took 1.4245627s to provisionDockerMachine
	I1026 08:31:47.030161  267704 start.go:293] postStartSetup for "kubernetes-upgrade-462840" (driver="docker")
	I1026 08:31:47.030174  267704 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:31:47.030275  267704 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:31:47.030326  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:47.048719  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:47.151363  267704 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:31:47.156340  267704 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:31:47.156378  267704 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:31:47.156392  267704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:31:47.156446  267704 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:31:47.156563  267704 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:31:47.156696  267704 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:31:47.167333  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:47.190078  267704 start.go:296] duration metric: took 159.894207ms for postStartSetup
	I1026 08:31:47.190180  267704 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:31:47.190229  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:47.212557  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:47.317052  267704 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:31:47.323109  267704 fix.go:56] duration metric: took 1.742224912s for fixHost
	I1026 08:31:47.323139  267704 start.go:83] releasing machines lock for "kubernetes-upgrade-462840", held for 1.742281981s
	I1026 08:31:47.323210  267704 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-462840
	I1026 08:31:47.346184  267704 ssh_runner.go:195] Run: cat /version.json
	I1026 08:31:47.346206  267704 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:31:47.346243  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:47.346288  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:47.369124  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:47.370065  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:47.551093  267704 ssh_runner.go:195] Run: systemctl --version
	I1026 08:31:47.560118  267704 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:31:47.603556  267704 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:31:47.609655  267704 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:31:47.609720  267704 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:31:47.622192  267704 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1026 08:31:47.622218  267704 start.go:495] detecting cgroup driver to use...
	I1026 08:31:47.622285  267704 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:31:47.622330  267704 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:31:47.643513  267704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:31:47.660459  267704 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:31:47.660519  267704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:31:47.680939  267704 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:31:47.698061  267704 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:31:47.835614  267704 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:31:47.943978  267704 docker.go:234] disabling docker service ...
	I1026 08:31:47.944039  267704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:31:47.958281  267704 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:31:47.971194  267704 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:31:48.082209  267704 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:31:48.182152  267704 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:31:48.195874  267704 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:31:48.213513  267704 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:31:48.213576  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.225375  267704 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:31:48.225439  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.236108  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.246635  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.258406  267704 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:31:48.268696  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.280320  267704 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.289950  267704 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:31:48.300230  267704 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:31:48.307825  267704 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:31:48.315931  267704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:48.424630  267704 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:31:48.589167  267704 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:31:48.589237  267704 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:31:48.594065  267704 start.go:563] Will wait 60s for crictl version
	I1026 08:31:48.594127  267704 ssh_runner.go:195] Run: which crictl
	I1026 08:31:48.599394  267704 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:31:48.632077  267704 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:31:48.632156  267704 ssh_runner.go:195] Run: crio --version
	I1026 08:31:48.676737  267704 ssh_runner.go:195] Run: crio --version
	I1026 08:31:48.716230  267704 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:31:49.411676  264509 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:31:49.411752  264509 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:31:49.411879  264509 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:31:49.411952  264509 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:31:49.412009  264509 kubeadm.go:318] OS: Linux
	I1026 08:31:49.412095  264509 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:31:49.412180  264509 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:31:49.412244  264509 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:31:49.412385  264509 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:31:49.412501  264509 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:31:49.412592  264509 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:31:49.412650  264509 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:31:49.412698  264509 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:31:49.412776  264509 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:31:49.412878  264509 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:31:49.412990  264509 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:31:49.413071  264509 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 08:31:49.415485  264509 out.go:252]   - Generating certificates and keys ...
	I1026 08:31:49.415602  264509 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:31:49.415702  264509 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:31:49.415825  264509 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:31:49.415924  264509 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:31:49.416023  264509 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:31:49.416102  264509 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:31:49.416184  264509 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:31:49.416383  264509 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-866212 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 08:31:49.416479  264509 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:31:49.416663  264509 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-866212 localhost] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1026 08:31:49.416757  264509 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:31:49.416892  264509 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 08:31:49.417047  264509 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:31:49.417127  264509 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:31:49.417199  264509 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:31:49.417342  264509 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:31:49.417413  264509 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:31:49.417498  264509 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:31:49.417568  264509 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:31:49.417671  264509 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:31:49.417768  264509 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:31:49.419143  264509 out.go:252]   - Booting up control plane ...
	I1026 08:31:49.419284  264509 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:31:49.419414  264509 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:31:49.419499  264509 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:31:49.419701  264509 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:31:49.419833  264509 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:31:49.419982  264509 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:31:49.420115  264509 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:31:49.420170  264509 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:31:49.420351  264509 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:31:49.420487  264509 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:31:49.420562  264509 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000898148s
	I1026 08:31:49.420685  264509 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:31:49.420803  264509 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.94.2:8444/livez
	I1026 08:31:49.420923  264509 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:31:49.421092  264509 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:31:49.421203  264509 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.382838942s
	I1026 08:31:49.421317  264509 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.219536742s
	I1026 08:31:49.421419  264509 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00163438s
	I1026 08:31:49.421568  264509 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:31:49.421754  264509 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:31:49.421849  264509 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:31:49.422178  264509 kubeadm.go:318] [mark-control-plane] Marking the node default-k8s-diff-port-866212 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:31:49.422274  264509 kubeadm.go:318] [bootstrap-token] Using token: fr2cc9.i0xcqspncm0oesw1
	I1026 08:31:49.424176  264509 out.go:252]   - Configuring RBAC rules ...
	I1026 08:31:49.425488  264509 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:31:49.425608  264509 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:31:49.425788  264509 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:31:49.425958  264509 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:31:49.426118  264509 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:31:49.426226  264509 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:31:49.426380  264509 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:31:49.426438  264509 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:31:49.426500  264509 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:31:49.426505  264509 kubeadm.go:318] 
	I1026 08:31:49.426584  264509 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:31:49.426590  264509 kubeadm.go:318] 
	I1026 08:31:49.426688  264509 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:31:49.426693  264509 kubeadm.go:318] 
	I1026 08:31:49.426725  264509 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:31:49.426798  264509 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:31:49.426862  264509 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:31:49.426867  264509 kubeadm.go:318] 
	I1026 08:31:49.426938  264509 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:31:49.426943  264509 kubeadm.go:318] 
	I1026 08:31:49.427012  264509 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:31:49.427017  264509 kubeadm.go:318] 
	I1026 08:31:49.427085  264509 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:31:49.427177  264509 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:31:49.427274  264509 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:31:49.427280  264509 kubeadm.go:318] 
	I1026 08:31:49.427393  264509 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:31:49.427488  264509 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:31:49.427494  264509 kubeadm.go:318] 
	I1026 08:31:49.427603  264509 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8444 --token fr2cc9.i0xcqspncm0oesw1 \
	I1026 08:31:49.427724  264509 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:31:49.427750  264509 kubeadm.go:318] 	--control-plane 
	I1026 08:31:49.427755  264509 kubeadm.go:318] 
	I1026 08:31:49.427859  264509 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:31:49.427867  264509 kubeadm.go:318] 
	I1026 08:31:49.427973  264509 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8444 --token fr2cc9.i0xcqspncm0oesw1 \
	I1026 08:31:49.428137  264509 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
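	The --discovery-token-ca-cert-hash in the kubeadm join commands above is a SHA-256 digest of the DER-encoded Subject Public Key Info of the cluster CA certificate. As a minimal Go sketch recomputing it, assuming the CA sits at the /var/lib/minikube/certs path used by the [certs] phase above:
	
	package main
	
	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)
	
	func main() {
		// Assumed path, per the certificateDir reported by kubeadm above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}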
	I1026 08:31:49.428148  264509 cni.go:84] Creating CNI manager for ""
	I1026 08:31:49.428156  264509 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:49.429914  264509 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:31:48.717491  267704 cli_runner.go:164] Run: docker network inspect kubernetes-upgrade-462840 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:31:48.737600  267704 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 08:31:48.742370  267704 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:31:48.742497  267704 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:48.742558  267704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:48.777492  267704 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:48.777521  267704 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:31:48.777580  267704 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:31:48.810451  267704 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:31:48.810470  267704 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:31:48.810476  267704 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 08:31:48.810599  267704 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=kubernetes-upgrade-462840 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:31:48.810683  267704 ssh_runner.go:195] Run: crio config
	I1026 08:31:48.877806  267704 cni.go:84] Creating CNI manager for ""
	I1026 08:31:48.877833  267704 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:48.877849  267704 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:31:48.877874  267704 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-462840 NodeName:kubernetes-upgrade-462840 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:31:48.878004  267704 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-462840"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
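	The rendered kubeadm config above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); note that its cgroupDriver: systemd matches the cgroup_manager value minikube wrote into /etc/crio/crio.conf.d/02-crio.conf earlier. As a minimal Go sketch, assuming a local copy named kubeadm.yaml and the third-party gopkg.in/yaml.v3 package, that scans the stream for the kubelet document and reads those fields:
	
	package main
	
	import (
		"bytes"
		"fmt"
		"os"
	
		"gopkg.in/yaml.v3"
	)
	
	// Only the fields this check needs from the KubeletConfiguration document.
	type kubeletConfig struct {
		Kind                     string `yaml:"kind"`
		CgroupDriver             string `yaml:"cgroupDriver"`
		ContainerRuntimeEndpoint string `yaml:"containerRuntimeEndpoint"`
	}
	
	func main() {
		data, err := os.ReadFile("kubeadm.yaml") // hypothetical local copy of the config above
		if err != nil {
			panic(err)
		}
		dec := yaml.NewDecoder(bytes.NewReader(data))
		for {
			var c kubeletConfig
			if err := dec.Decode(&c); err != nil {
				break // io.EOF once every document is consumed
			}
			if c.Kind == "KubeletConfiguration" {
				fmt.Println("cgroupDriver:", c.CgroupDriver)                 // expect "systemd"
				fmt.Println("runtime endpoint:", c.ContainerRuntimeEndpoint) // expect the crio socket
			}
		}
	}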
	I1026 08:31:48.878078  267704 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:31:48.886436  267704 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:31:48.886496  267704 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:31:48.894955  267704 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (375 bytes)
	I1026 08:31:48.910267  267704 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:31:48.924605  267704 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2221 bytes)
	I1026 08:31:48.938654  267704 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:31:48.943568  267704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:49.070124  267704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:49.095920  267704 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840 for IP: 192.168.85.2
	I1026 08:31:49.095943  267704 certs.go:195] generating shared ca certs ...
	I1026 08:31:49.095962  267704 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:49.096119  267704 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:31:49.096188  267704 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:31:49.096202  267704 certs.go:257] generating profile certs ...
	I1026 08:31:49.096349  267704 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key
	I1026 08:31:49.096417  267704 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/apiserver.key.c3d3ac60
	I1026 08:31:49.096482  267704 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/proxy-client.key
	I1026 08:31:49.096629  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:31:49.096691  267704 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:31:49.096706  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:31:49.096737  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:31:49.096764  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:31:49.096790  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:31:49.096844  267704 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:31:49.097622  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:31:49.120602  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:31:49.140480  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:31:49.160397  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:31:49.179060  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1026 08:31:49.198529  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:31:49.221653  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:31:49.245688  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:31:49.264023  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:31:49.282388  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:31:49.300549  267704 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:31:49.319630  267704 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:31:49.334179  267704 ssh_runner.go:195] Run: openssl version
	I1026 08:31:49.341666  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:31:49.354640  267704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:31:49.358850  267704 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:31:49.358898  267704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:31:49.405415  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:31:49.419928  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:31:49.432277  267704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:49.437427  267704 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:49.437484  267704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:31:49.482717  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:31:49.493745  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:31:49.506496  267704 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:31:49.511850  267704 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:31:49.511910  267704 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:31:49.560045  267704 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
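
The three openssl/ln sequences above follow the standard OpenSSL CA lookup convention: each certificate under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and symlinked as /etc/ssl/certs/<hash>.0 so TLS clients can find it by subject hash. A minimal Go sketch of one such step (illustrative only, not minikube's actual code; the certificate path is taken from the log above):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	cert := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
	// Ask openssl for the subject hash, exactly as the Run lines above do.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl:", err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0" // e.g. /etc/ssl/certs/b5213941.0
	_ = os.Remove(link)                     // mirror `ln -fs`: replace any stale link
	if err := os.Symlink(cert, link); err != nil {
		fmt.Fprintln(os.Stderr, "symlink:", err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", cert)
}
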
	I1026 08:31:49.570992  267704 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:31:49.576688  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:31:49.626585  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:31:49.669572  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:31:49.721455  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:31:49.780645  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:31:49.837537  267704 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
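
Each `-checkend 86400` probe above asks whether a certificate expires within the next 24 hours (86400 seconds); a non-zero exit would trigger regeneration. A self-contained Go equivalent of one probe (a sketch mirroring openssl's semantics, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// `openssl x509 -checkend 86400` exits 1 if NotAfter falls inside the window.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("Certificate will expire")
		os.Exit(1)
	}
	fmt.Println("Certificate will not expire")
}
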
	I1026 08:31:49.883659  267704 kubeadm.go:400] StartCluster: {Name:kubernetes-upgrade-462840 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-462840 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:49.883766  267704 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:31:49.883816  267704 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:31:49.917946  267704 cri.go:89] found id: "4958a648ca295fce045a1f667d2209d50dc0bf2e0afcf9fad18a9bce7c7f307d"
	I1026 08:31:49.917976  267704 cri.go:89] found id: "bb7bb0d859a2fbc17e1dd579b8d7db4e25f9b0dd1bc20a0f55eca5503d9fcb25"
	I1026 08:31:49.917982  267704 cri.go:89] found id: "24fb5764a0e871409e2a79e7098c50ad4eb0d6db30828462b1031015b199f93c"
	I1026 08:31:49.917988  267704 cri.go:89] found id: "c7e201bdca932a6517bdd17f9eb30c897f35b70661dd5e16ba3539766dc0e1e5"
	I1026 08:31:49.917992  267704 cri.go:89] found id: "e611ab96a9560f1e8088fa6b386ecffa80247a292f6b73f0e38ab684c230dfec"
	I1026 08:31:49.917997  267704 cri.go:89] found id: "9c665e7d409350177c05a172f737a8a08f49cc94100fe98bfad5b44e0662a4b7"
	I1026 08:31:49.918002  267704 cri.go:89] found id: ""
	I1026 08:31:49.918048  267704 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:31:49.930447  267704 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:31:49Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:31:49.930502  267704 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:31:49.939372  267704 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:31:49.939396  267704 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:31:49.939446  267704 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:31:49.947647  267704 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:31:49.948861  267704 kubeconfig.go:125] found "kubernetes-upgrade-462840" server: "https://192.168.85.2:8443"
	I1026 08:31:49.950759  267704 kapi.go:59] client config for kubernetes-upgrade-462840: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:31:49.951263  267704 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1026 08:31:49.951285  267704 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1026 08:31:49.951292  267704 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1026 08:31:49.951299  267704 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1026 08:31:49.951308  267704 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1026 08:31:49.951684  267704 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:31:49.960343  267704 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.85.2
	I1026 08:31:49.960371  267704 kubeadm.go:601] duration metric: took 20.970125ms to restartPrimaryControlPlane
	I1026 08:31:49.960378  267704 kubeadm.go:402] duration metric: took 76.731552ms to StartCluster
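
The reconfiguration decision above reduces to the exit code of `diff -u` on the old and new kubeadm.yaml: exit 0 means the files match, so the running control plane is kept as-is. A minimal Go sketch under that assumption (illustrative, not minikube's code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// needsReconfig mirrors the diff step above: diff exits 0 when the files are
// identical, 1 when they differ, and >1 on an actual error.
func needsReconfig(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil // identical: keep the running control plane
	}
	var exit *exec.ExitError
	if errors.As(err, &exit) && exit.ExitCode() == 1 {
		return true, nil // diff found changes: reconfigure
	}
	return false, err // diff itself failed
}

func main() {
	changed, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println("needs reconfiguration:", changed, err)
}
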
	I1026 08:31:49.960391  267704 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:49.960447  267704 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:49.961803  267704 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:49.962039  267704 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:31:49.962111  267704 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:31:49.962209  267704 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-462840"
	I1026 08:31:49.962229  267704 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-462840"
	W1026 08:31:49.962238  267704 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:31:49.962237  267704 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-462840"
	I1026 08:31:49.962276  267704 host.go:66] Checking if "kubernetes-upgrade-462840" exists ...
	I1026 08:31:49.962286  267704 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-462840"
	I1026 08:31:49.962297  267704 config.go:182] Loaded profile config "kubernetes-upgrade-462840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:49.962630  267704 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:31:49.962749  267704 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:31:49.963556  267704 out.go:179] * Verifying Kubernetes components...
	I1026 08:31:49.964681  267704 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:49.986971  267704 kapi.go:59] client config for kubernetes-upgrade-462840: &rest.Config{Host:"https://192.168.85.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.crt", KeyFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key", CAFile:"/home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x281c540), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1026 08:31:49.987301  267704 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-462840"
	W1026 08:31:49.987322  267704 addons.go:247] addon default-storageclass should already be in state true
	I1026 08:31:49.987352  267704 host.go:66] Checking if "kubernetes-upgrade-462840" exists ...
	I1026 08:31:49.987796  267704 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-462840 --format={{.State.Status}}
	I1026 08:31:49.988026  267704 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:31:49.989337  267704 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:49.989355  267704 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:31:49.989410  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:50.015474  267704 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:50.015504  267704 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:31:50.015567  267704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-462840
	I1026 08:31:50.019449  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:50.036894  267704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33008 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/kubernetes-upgrade-462840/id_rsa Username:docker}
	I1026 08:31:50.105575  267704 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:50.118884  267704 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:31:50.118937  267704 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:31:50.131625  267704 api_server.go:72] duration metric: took 169.554331ms to wait for apiserver process to appear ...
	I1026 08:31:50.131652  267704 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:31:50.131674  267704 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:31:50.133398  267704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:50.137416  267704 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 08:31:50.144074  267704 api_server.go:141] control plane version: v1.34.1
	I1026 08:31:50.144662  267704 api_server.go:131] duration metric: took 12.997968ms to wait for apiserver health ...
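
The healthz wait above is a plain HTTPS GET against the apiserver that succeeds once /healthz returns 200 "ok". A self-contained sketch of that polling loop (TLS verification is skipped here only to keep the example short; minikube itself authenticates with the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
		},
	}
	deadline := time.Now().Add(1 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			ok := resp.StatusCode == http.StatusOK
			resp.Body.Close()
			if ok {
				fmt.Println("apiserver healthz returned 200: ok")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}
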
	I1026 08:31:50.144693  267704 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:31:50.148903  267704 system_pods.go:59] 9 kube-system pods found
	I1026 08:31:50.148936  267704 system_pods.go:61] "coredns-66bc5c9577-9h2k4" [f8718e4d-e6cf-4256-bdf2-0747ec469ff2] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:31:50.148946  267704 system_pods.go:61] "coredns-66bc5c9577-fph9s" [d2bdb1f3-058c-48f3-9abc-54cfe3efcf41] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:31:50.148957  267704 system_pods.go:61] "etcd-kubernetes-upgrade-462840" [c6b250e2-a70d-4d93-acbc-e4489044f4f7] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:31:50.148964  267704 system_pods.go:61] "kindnet-9lnrs" [9fb20ad4-47c4-4019-9439-cfe47da44aad] Running
	I1026 08:31:50.148971  267704 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-462840" [10917837-dfc4-4d59-af5f-c695193dc381] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:31:50.148978  267704 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-462840" [a0663497-2426-4702-b659-1622726c05ef] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:31:50.148982  267704 system_pods.go:61] "kube-proxy-rrc4b" [e0acce76-266e-4649-942e-9cc6941f5a14] Running
	I1026 08:31:50.148997  267704 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-462840" [0a5a321c-2fb6-4ca9-b787-22f8a1300cbc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:31:50.149003  267704 system_pods.go:61] "storage-provisioner" [e99eac4e-01da-4a3c-a95a-6a26471f64bf] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:31:50.149010  267704 system_pods.go:74] duration metric: took 4.310323ms to wait for pod list to return data ...
	I1026 08:31:50.149021  267704 kubeadm.go:586] duration metric: took 186.95841ms to wait for: map[apiserver:true system_pods:true]
	I1026 08:31:50.149034  267704 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:31:50.151870  267704 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:31:50.151895  267704 node_conditions.go:123] node cpu capacity is 8
	I1026 08:31:50.151909  267704 node_conditions.go:105] duration metric: took 2.869596ms to run NodePressure ...
	I1026 08:31:50.151923  267704 start.go:241] waiting for startup goroutines ...
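
The NodePressure verification reads the cpu and ephemeral-storage figures straight off the Node object. A client-go sketch that prints the same values (the kubeconfig path is the one from the log; the program is illustrative, not minikube's verifier):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/21772-9429/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := client.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Matches the "node cpu capacity" / "storage ephemeral capacity" lines above.
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
	}
}
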
	I1026 08:31:50.159786  267704 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:50.625501  267704 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 08:31:50.626837  267704 addons.go:514] duration metric: took 664.733912ms for enable addons: enabled=[storage-provisioner default-storageclass]
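
Each addon manifest is applied with the version-matched kubectl bundled inside the VM, pointed at the in-VM kubeconfig, as the Run lines above show. A sketch of one such invocation (illustrative; all paths are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// sudo accepts leading VAR=value assignments, so KUBECONFIG travels with the command.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.34.1/kubectl",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "apply failed:", err)
		os.Exit(1)
	}
}
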
	I1026 08:31:50.626876  267704 start.go:246] waiting for cluster config update ...
	I1026 08:31:50.626891  267704 start.go:255] writing updated cluster config ...
	I1026 08:31:50.627111  267704 ssh_runner.go:195] Run: rm -f paused
	I1026 08:31:50.676850  267704 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:31:50.678804  267704 out.go:179] * Done! kubectl is now configured to use "kubernetes-upgrade-462840" cluster and "default" namespace by default
	W1026 08:31:46.983432  258469 pod_ready.go:104] pod "coredns-66bc5c9577-jktn8" is not "Ready", error: <nil>
	W1026 08:31:48.988227  258469 pod_ready.go:104] pod "coredns-66bc5c9577-jktn8" is not "Ready", error: <nil>
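
The "minor skew: 0" figure reported at the end of the run compares only the minor version components of the host kubectl and the cluster. A small sketch of that comparison (an assumption about the calculation, consistent with the 1.34.1 / 1.34.1 output above):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minor extracts the minor component of a "major.minor.patch" version string.
func minor(v string) (int, error) {
	parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
	if len(parts) < 2 {
		return 0, fmt.Errorf("unexpected version %q", v)
	}
	return strconv.Atoi(parts[1])
}

func main() {
	clientMinor, _ := minor("1.34.1")  // kubectl on the host
	clusterMinor, _ := minor("1.34.1") // control plane version
	skew := clientMinor - clusterMinor
	if skew < 0 {
		skew = -skew
	}
	fmt.Printf("kubectl: 1.34.1, cluster: 1.34.1 (minor skew: %d)\n", skew)
}
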
	
	
	==> CRI-O <==
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.720069496Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.723847212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.723871321Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.814284823Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8ca23ab-1242-4f57-85aa-0af9b0e561c9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.817287491Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f8ba299a-b5ac-47b5-a467-b669e3c769de name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.820412193Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=90d3d985-c092-4a2c-9488-432b92a7d8df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.820543646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.82765735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.828110187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.859859501Z" level=info msg="Created container a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=90d3d985-c092-4a2c-9488-432b92a7d8df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.860539487Z" level=info msg="Starting container: a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64" id=ceaaefa0-750a-4d8f-a9de-77e38bf039dd name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.862414106Z" level=info msg="Started container" PID=1758 containerID=a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper id=ceaaefa0-750a-4d8f-a9de-77e38bf039dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=50854f2308022d620934d74770c1a63e2bc85fec2dec4cf847777a5d8aec3b2a
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.91953654Z" level=info msg="Removing container: 4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709" id=aa25c7ec-3de7-4150-b814-425d945b3557 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.931402497Z" level=info msg="Removed container 4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=aa25c7ec-3de7-4150-b814-425d945b3557 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.938546869Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0710adda-b8b2-468e-86f2-c6a70c4bca53 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.953858846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=494c26a5-d7ed-4d8b-86c0-6a30de540f04 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.954893676Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7754375f-bb3f-4f7d-b56e-ee634a7cfb0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.955040778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987694929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987911517Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b0837e0acbb301908096da6a49344336cc98534254025b0133b6928926faf06/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987952572Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b0837e0acbb301908096da6a49344336cc98534254025b0133b6928926faf06/merged/etc/group: no such file or directory"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.989469904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.031618532Z" level=info msg="Created container 09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075: kube-system/storage-provisioner/storage-provisioner" id=7754375f-bb3f-4f7d-b56e-ee634a7cfb0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.032580028Z" level=info msg="Starting container: 09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075" id=140a4d02-004d-4467-8ba4-2595cd8205f1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.035136839Z" level=info msg="Started container" PID=1772 containerID=09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075 description=kube-system/storage-provisioner/storage-provisioner id=140a4d02-004d-4467-8ba4-2595cd8205f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=666e5a550ed7ca2a7162a7f30d58b082e5905051bc6736e05b1b54dc95a93298
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	09a9bd3e1f32e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           20 seconds ago      Running             storage-provisioner         1                   666e5a550ed7c       storage-provisioner                          kube-system
	a81919b8384b0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           26 seconds ago      Exited              dashboard-metrics-scraper   2                   50854f2308022       dashboard-metrics-scraper-6ffb444bf9-xps45   kubernetes-dashboard
	afc141c8a034d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   df0d255030564       kubernetes-dashboard-855c9754f9-48znz        kubernetes-dashboard
	d67644d288b5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           51 seconds ago      Running             coredns                     0                   d1013077e84e5       coredns-66bc5c9577-p5nmq                     kube-system
	04a954300591c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           51 seconds ago      Running             busybox                     1                   b5dc7b5179471       busybox                                      default
	ad1aac48cb866       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           51 seconds ago      Exited              storage-provisioner         0                   666e5a550ed7c       storage-provisioner                          kube-system
	529f576c97f1a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           51 seconds ago      Running             kindnet-cni                 0                   c491ab1ec78f1       kindnet-8lrm6                                kube-system
	2da6a31b2b449       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           51 seconds ago      Running             kube-proxy                  0                   78c01bfae4a4d       kube-proxy-xpz59                             kube-system
	4c584459a8b9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   7dcd572e130b8       kube-controller-manager-no-preload-001983    kube-system
	895b68d06c4c8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   8998e16fbdcc5       kube-scheduler-no-preload-001983             kube-system
	9af37f96ad50d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   5c480a6033ae2       kube-apiserver-no-preload-001983             kube-system
	09efe5a8a887a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   5f0258e221fa1       etcd-no-preload-001983                       kube-system
	
	
	==> coredns [d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60118 - 40693 "HINFO IN 4165625567285328766.644999149996727103. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018404255s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               no-preload-001983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-001983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-001983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-001983
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-001983
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0d1d1615-c76d-4158-8917-674a566b71fc
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-p5nmq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-no-preload-001983                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-8lrm6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      104s
	  kube-system                 kube-apiserver-no-preload-001983              250m (3%)     0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-controller-manager-no-preload-001983     200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-xpz59                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-scheduler-no-preload-001983              100m (1%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xps45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-48znz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 102s                 kube-proxy       
	  Normal  Starting                 51s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    108s                 kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     108s                 kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  108s                 kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           104s                 node-controller  Node no-preload-001983 event: Registered Node no-preload-001983 in Controller
	  Normal  NodeReady                91s                  kubelet          Node no-preload-001983 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                  node-controller  Node no-preload-001983 event: Registered Node no-preload-001983 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb] <==
	{"level":"warn","ts":"2025-10-26T08:31:00.816687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.833749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.840062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.846231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.852586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.858872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.865927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.872732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.884384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.890502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.898081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.906174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.913053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.919420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.926118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.933732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.939939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.946600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.953551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.960531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.989193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.996104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:01.042190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:31:32.205416Z","caller":"traceutil/trace.go:172","msg":"trace[371660045] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"101.542771ms","start":"2025-10-26T08:31:32.103848Z","end":"2025-10-26T08:31:32.205390Z","steps":["trace[371660045] 'process raft request'  (duration: 101.316334ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:31:32.960156Z","caller":"traceutil/trace.go:172","msg":"trace[1929247081] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"131.598098ms","start":"2025-10-26T08:31:32.828536Z","end":"2025-10-26T08:31:32.960134Z","steps":["trace[1929247081] 'process raft request'  (duration: 125.243073ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:31:53 up  1:14,  0 user,  load average: 4.14, 3.35, 2.11
	Linux no-preload-001983 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee] <==
	I1026 08:31:02.304115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:31:02.304418       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 08:31:02.304555       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:31:02.304570       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:31:02.304590       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:31:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:31:02.698522       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:31:02.698581       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:31:02.698597       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:31:02.699042       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:31:03.000076       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:31:03.000114       1 metrics.go:72] Registering metrics
	I1026 08:31:03.000190       1 controller.go:711] "Syncing nftables rules"
	I1026 08:31:12.698700       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:12.698786       1 main.go:301] handling current node
	I1026 08:31:22.698461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:22.698510       1 main.go:301] handling current node
	I1026 08:31:32.698621       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:32.698663       1 main.go:301] handling current node
	I1026 08:31:42.698829       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:42.698873       1 main.go:301] handling current node
	I1026 08:31:52.698470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:52.698513       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e] <==
	I1026 08:31:01.501614       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:31:01.501167       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:31:01.501941       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:31:01.502674       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:31:01.506437       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:31:01.521338       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:31:01.521330       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:31:01.521421       1 policy_source.go:240] refreshing policies
	I1026 08:31:01.521421       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:31:01.521439       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:31:01.521448       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:31:01.521454       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:31:01.527212       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:31:01.529110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:31:01.748282       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:31:01.777538       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:31:01.798412       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:31:01.806349       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:31:01.812762       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:31:01.847912       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.159.87"}
	I1026 08:31:01.858121       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.208.4"}
	I1026 08:31:02.404219       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:31:05.017049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:31:05.265784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:31:05.415812       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1] <==
	I1026 08:31:04.838656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:31:04.842941       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:31:04.845298       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:31:04.847505       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:31:04.848946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:31:04.851335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:31:04.861856       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 08:31:04.861909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:31:04.862387       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:31:04.863356       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:31:04.863430       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:31:04.863553       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:31:04.863566       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-001983"
	I1026 08:31:04.863614       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:31:04.863636       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:31:04.863643       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:31:04.863691       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:31:04.865032       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:31:04.867164       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:31:04.868337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:04.869541       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:31:04.869572       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:31:04.871820       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:31:04.874177       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:31:04.889490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e] <==
	I1026 08:31:02.200291       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:31:02.264373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:31:02.365540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:31:02.365578       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 08:31:02.365659       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:31:02.383960       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:31:02.384014       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:31:02.389316       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:31:02.389710       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:31:02.389749       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:02.391374       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:31:02.391394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:31:02.391426       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:31:02.391422       1 config.go:200] "Starting service config controller"
	I1026 08:31:02.391444       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:31:02.391433       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:31:02.391488       1 config.go:309] "Starting node config controller"
	I1026 08:31:02.391497       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:31:02.491640       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:31:02.491662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:31:02.491671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:31:02.491690       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456] <==
	I1026 08:31:00.206993       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:31:01.471698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:31:01.471729       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:01.477317       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 08:31:01.477323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:01.477353       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:01.477358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 08:31:01.477346       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.477390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.477806       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:31:01.477875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:31:01.577684       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 08:31:01.577711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.577661       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:05 no-preload-001983 kubelet[721]: I1026 08:31:05.550955     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47lz\" (UniqueName: \"kubernetes.io/projected/390e2ecb-697d-4556-824a-09e99b456a1a-kube-api-access-t47lz\") pod \"kubernetes-dashboard-855c9754f9-48znz\" (UID: \"390e2ecb-697d-4556-824a-09e99b456a1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz"
	Oct 26 08:31:05 no-preload-001983 kubelet[721]: I1026 08:31:05.551042     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/390e2ecb-697d-4556-824a-09e99b456a1a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-48znz\" (UID: \"390e2ecb-697d-4556-824a-09e99b456a1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz"
	Oct 26 08:31:06 no-preload-001983 kubelet[721]: I1026 08:31:06.273068     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:31:08 no-preload-001983 kubelet[721]: I1026 08:31:08.860635     721 scope.go:117] "RemoveContainer" containerID="993b65361423122c727dca7e516c6dcd43e606047619f2894f832bababbe9234"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: I1026 08:31:09.867592     721 scope.go:117] "RemoveContainer" containerID="993b65361423122c727dca7e516c6dcd43e606047619f2894f832bababbe9234"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: I1026 08:31:09.867632     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: E1026 08:31:09.867830     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:10 no-preload-001983 kubelet[721]: I1026 08:31:10.869891     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:10 no-preload-001983 kubelet[721]: E1026 08:31:10.870083     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:11 no-preload-001983 kubelet[721]: I1026 08:31:11.890057     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz" podStartSLOduration=1.068496609 podStartE2EDuration="6.890032218s" podCreationTimestamp="2025-10-26 08:31:05 +0000 UTC" firstStartedPulling="2025-10-26 08:31:05.834452095 +0000 UTC m=+7.110743816" lastFinishedPulling="2025-10-26 08:31:11.655987701 +0000 UTC m=+12.932279425" observedRunningTime="2025-10-26 08:31:11.889655603 +0000 UTC m=+13.165947342" watchObservedRunningTime="2025-10-26 08:31:11.890032218 +0000 UTC m=+13.166323954"
	Oct 26 08:31:13 no-preload-001983 kubelet[721]: I1026 08:31:13.373752     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:13 no-preload-001983 kubelet[721]: E1026 08:31:13.373973     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.813742     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.918176     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.918421     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: E1026 08:31:26.918639     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:32 no-preload-001983 kubelet[721]: I1026 08:31:32.938120     721 scope.go:117] "RemoveContainer" containerID="ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	Oct 26 08:31:33 no-preload-001983 kubelet[721]: I1026 08:31:33.374298     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:33 no-preload-001983 kubelet[721]: E1026 08:31:33.374525     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:46 no-preload-001983 kubelet[721]: I1026 08:31:46.813110     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:46 no-preload-001983 kubelet[721]: E1026 08:31:46.813387     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:50 no-preload-001983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:31:50 no-preload-001983 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:31:50 no-preload-001983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:31:50 no-preload-001983 systemd[1]: kubelet.service: Consumed 1.698s CPU time.
	
	
	==> kubernetes-dashboard [afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75] <==
	2025/10/26 08:31:11 Starting overwatch
	2025/10/26 08:31:11 Using namespace: kubernetes-dashboard
	2025/10/26 08:31:11 Using in-cluster config to connect to apiserver
	2025/10/26 08:31:11 Using secret token for csrf signing
	2025/10/26 08:31:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:31:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:31:11 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:31:11 Generating JWE encryption key
	2025/10/26 08:31:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:31:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:31:12 Initializing JWE encryption key from synchronized object
	2025/10/26 08:31:12 Creating in-cluster Sidecar client
	2025/10/26 08:31:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:12 Serving insecurely on HTTP port: 9090
	2025/10/26 08:31:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075] <==
	I1026 08:31:33.051452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:33.060157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:33.060212       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:31:33.064059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:36.519227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:40.779850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:44.379612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:47.436224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.459001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.464848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:31:50.465013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:31:50.465167       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b8f114b-3680-4914-b270-3b66442ba435", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34 became leader
	I1026 08:31:50.465198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34!
	W1026 08:31:50.467646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.471734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:31:50.566010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34!
	W1026 08:31:52.475536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:52.480217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25] <==
	I1026 08:31:02.178792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:32.181687       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
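
Note: the storage-provisioner failure at the end of the log above is the clearest symptom of the pause: its startup call to the in-cluster apiserver VIP (https://10.96.0.1:443/version) times out once the apiserver is paused. Below is a minimal Go sketch of an equivalent probe; the URL and the 32s timeout are taken from the log line, while the code itself is illustrative only and not the provisioner's actual implementation (which goes through client-go and authenticates with the in-cluster CA and service-account token).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// Probe the endpoint the provisioner log shows timing out:
		// Get "https://10.96.0.1:443/version?timeout=32s": dial tcp: i/o timeout
		client := &http.Client{
			Timeout: 32 * time.Second,
			Transport: &http.Transport{
				// Skip verification for this illustrative probe only; the real
				// client verifies the cluster CA.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver status:", resp.Status)
	}
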
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-001983 -n no-preload-001983: exit status 2 (491.184206ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-001983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
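
The "status error: exit status 2 (may be ok)" above reflects that minikube status encodes component health in its exit code rather than signalling a hard failure, so the helper keeps going. A hedged Go sketch of reading such a non-zero-but-informative exit is below; the binary path and profile name come from the trace above, and the helper's real handling lives in helpers_test.go and may differ.

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-amd64", "status",
			"--format={{.APIServer}}", "-p", "no-preload-001983")
		out, err := cmd.Output() // stdout ("Running") is still returned on a non-zero exit
		var ee *exec.ExitError
		if errors.As(err, &ee) {
			// minikube uses the exit code to flag unhealthy components,
			// so a non-zero exit here is informative, not necessarily fatal.
			fmt.Printf("status exited %d (may be ok): %s", ee.ExitCode(), out)
			return
		}
		if err != nil {
			fmt.Println("could not run minikube:", err)
			return
		}
		fmt.Printf("status: %s", out)
	}
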
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-001983
helpers_test.go:243: (dbg) docker inspect no-preload-001983:

-- stdout --
	[
	    {
	        "Id": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	        "Created": "2025-10-26T08:29:35.306793049Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 255621,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:30:52.742066608Z",
	            "FinishedAt": "2025-10-26T08:30:51.415359556Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hostname",
	        "HostsPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/hosts",
	        "LogPath": "/var/lib/docker/containers/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6/1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6-json.log",
	        "Name": "/no-preload-001983",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-001983:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-001983",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1c02a726554994cd776d658b493bdd561aa361a6448c5a3630f23fba852a0af6",
	                "LowerDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/635c7ae8fdcb97ab370d4b345349b0cab3ee9a001eb19ea34208ab5ebca1fde4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-001983",
	                "Source": "/var/lib/docker/volumes/no-preload-001983/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-001983",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-001983",
	                "name.minikube.sigs.k8s.io": "no-preload-001983",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2d47042a6342050fc62f7bf7b362650e5e9c06e1961e22ef8c0aa82c25f4ae2a",
	            "SandboxKey": "/var/run/docker/netns/2d47042a6342",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33073"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-001983": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:53:86:be:53:35",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "0bdb8ca3ba1ed8384cb0d6339c847a03d4b5a80b703fdd60e4df4eb3b0fbcff7",
	                    "EndpointID": "ed4446bd4dfc868a4827b00f11da8dedfb97fd1ba07c8fba824a3e14183c8419",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-001983",
	                        "1c02a7265549"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
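
For post-mortem purposes, the most useful part of the inspect output above is the port map: the guest apiserver port 8443 is published on 127.0.0.1:33076. A sketch of extracting that mapping with the standard docker CLI Go-template follows; the container name comes from the profile above, and the code is illustrative rather than the harness's actual logic.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index the NetworkSettings.Ports map for "8443/tcp" and take the
		// first binding's HostPort (33076 in the inspect output above).
		out, err := exec.Command("docker", "inspect", "--format",
			`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
			"no-preload-001983").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}
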
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983: exit status 2 (367.48431ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-001983 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-001983 logs -n 25: (1.25226663s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:29 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-810379 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p old-k8s-version-810379 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:31:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:31:53.293149  270203 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:31:53.293492  270203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:53.293504  270203 out.go:374] Setting ErrFile to fd 2...
	I1026 08:31:53.293508  270203 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:31:53.293739  270203 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:31:53.294280  270203 out.go:368] Setting JSON to false
	I1026 08:31:53.295719  270203 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4464,"bootTime":1761463049,"procs":350,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:31:53.295825  270203 start.go:141] virtualization: kvm guest
	I1026 08:31:53.297983  270203 out.go:179] * [newest-cni-366970] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:31:53.300031  270203 notify.go:220] Checking for updates...
	I1026 08:31:53.300114  270203 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:31:53.301561  270203 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:31:53.303277  270203 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:53.304586  270203 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:31:53.306010  270203 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:31:53.307361  270203 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:31:53.309455  270203 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:53.309563  270203 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:53.309696  270203 config.go:182] Loaded profile config "no-preload-001983": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:53.309795  270203 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:31:53.337184  270203 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:31:53.337305  270203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:53.407084  270203 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:53.395010939 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:53.407227  270203 docker.go:318] overlay module found
	I1026 08:31:53.409049  270203 out.go:179] * Using the docker driver based on user configuration
	I1026 08:31:53.410379  270203 start.go:305] selected driver: docker
	I1026 08:31:53.410396  270203 start.go:925] validating driver "docker" against <nil>
	I1026 08:31:53.410410  270203 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:31:53.411235  270203 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:31:53.481738  270203 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:31:53.4692967 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:31:53.481987  270203 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1026 08:31:53.482020  270203 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1026 08:31:53.482357  270203 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 08:31:53.484581  270203 out.go:179] * Using Docker driver with root privileges
	I1026 08:31:53.485861  270203 cni.go:84] Creating CNI manager for ""
	I1026 08:31:53.485948  270203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:31:53.485960  270203 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:31:53.486032  270203 start.go:349] cluster config:
	{Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:31:53.487500  270203 out.go:179] * Starting "newest-cni-366970" primary control-plane node in "newest-cni-366970" cluster
	I1026 08:31:53.488772  270203 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:31:53.490164  270203 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:31:53.491446  270203 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:31:53.491491  270203 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:31:53.491533  270203 cache.go:58] Caching tarball of preloaded images
	I1026 08:31:53.491570  270203 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:31:53.491624  270203 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:31:53.491639  270203 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:31:53.491742  270203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/config.json ...
	I1026 08:31:53.491761  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/config.json: {Name:mkb972b4f22b5d40ea29a262f2c3d55a1ad4a9df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:53.516838  270203 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:31:53.516862  270203 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:31:53.516881  270203 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:31:53.516910  270203 start.go:360] acquireMachinesLock for newest-cni-366970: {Name:mk148c515095ce3faeaa74a2b6e6c65e43915fbe Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:31:53.517021  270203 start.go:364] duration metric: took 89.398µs to acquireMachinesLock for "newest-cni-366970"
	I1026 08:31:53.517052  270203 start.go:93] Provisioning new machine with config: &{Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:31:53.517135  270203 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:31:53.329230  264509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:31:53.829695  264509 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:31:53.917007  264509 kubeadm.go:1113] duration metric: took 4.199687007s to wait for elevateKubeSystemPrivileges
	I1026 08:31:53.917062  264509 kubeadm.go:402] duration metric: took 16.02869902s to StartCluster
	I1026 08:31:53.917086  264509 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:53.917158  264509 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:31:53.919546  264509 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:31:53.919843  264509 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:31:53.919863  264509 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:31:53.919837  264509 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:31:53.919943  264509 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866212"
	I1026 08:31:53.919962  264509 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-866212"
	I1026 08:31:53.920059  264509 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866212"
	I1026 08:31:53.920081  264509 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866212"
	I1026 08:31:53.919986  264509 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:31:53.920205  264509 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:31:53.920564  264509 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:31:53.920704  264509 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:31:53.928028  264509 out.go:179] * Verifying Kubernetes components...
	I1026 08:31:53.929879  264509 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:31:53.950151  264509 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:31:53.950692  264509 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-866212"
	I1026 08:31:53.950740  264509 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:31:53.951189  264509 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:31:53.954924  264509 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:53.954944  264509 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:31:53.955015  264509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:31:53.990602  264509 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:53.990676  264509 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:31:53.990744  264509 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:31:53.992769  264509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:31:54.031562  264509 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33083 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:31:54.058231  264509 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:31:54.137538  264509 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:31:54.164666  264509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:31:54.196458  264509 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:31:54.280927  264509 start.go:976] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1026 08:31:54.282565  264509 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-866212" to be "Ready" ...
	I1026 08:31:54.563420  264509 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
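
The configmap rewrite at 08:31:54.058 above pipes the live Corefile through sed, inserting a hosts block ahead of the forward directive and a log directive ahead of errors, then replaces the ConfigMap. The resulting Corefile should look roughly like this (the surrounding plugin lines are assumed from a stock kubeadm/minikube Corefile; only the hosts and log entries come from the logged command itself):

    .:53 {
        log
        errors
        health
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }

This is what makes host.minikube.internal resolve to the Docker network gateway (192.168.94.1) from inside the cluster, matching the "host record injected into CoreDNS's ConfigMap" line at 08:31:54.280.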
	
	
	==> CRI-O <==
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.720069496Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.723847212Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Oct 26 08:31:12 no-preload-001983 crio[572]: time="2025-10-26T08:31:12.723871321Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.814284823Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c8ca23ab-1242-4f57-85aa-0af9b0e561c9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.817287491Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f8ba299a-b5ac-47b5-a467-b669e3c769de name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.820412193Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=90d3d985-c092-4a2c-9488-432b92a7d8df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.820543646Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.82765735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.828110187Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.859859501Z" level=info msg="Created container a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=90d3d985-c092-4a2c-9488-432b92a7d8df name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.860539487Z" level=info msg="Starting container: a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64" id=ceaaefa0-750a-4d8f-a9de-77e38bf039dd name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.862414106Z" level=info msg="Started container" PID=1758 containerID=a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper id=ceaaefa0-750a-4d8f-a9de-77e38bf039dd name=/runtime.v1.RuntimeService/StartContainer sandboxID=50854f2308022d620934d74770c1a63e2bc85fec2dec4cf847777a5d8aec3b2a
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.91953654Z" level=info msg="Removing container: 4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709" id=aa25c7ec-3de7-4150-b814-425d945b3557 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:26 no-preload-001983 crio[572]: time="2025-10-26T08:31:26.931402497Z" level=info msg="Removed container 4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45/dashboard-metrics-scraper" id=aa25c7ec-3de7-4150-b814-425d945b3557 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.938546869Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0710adda-b8b2-468e-86f2-c6a70c4bca53 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.953858846Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=494c26a5-d7ed-4d8b-86c0-6a30de540f04 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.954893676Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7754375f-bb3f-4f7d-b56e-ee634a7cfb0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.955040778Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987694929Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987911517Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8b0837e0acbb301908096da6a49344336cc98534254025b0133b6928926faf06/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.987952572Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8b0837e0acbb301908096da6a49344336cc98534254025b0133b6928926faf06/merged/etc/group: no such file or directory"
	Oct 26 08:31:32 no-preload-001983 crio[572]: time="2025-10-26T08:31:32.989469904Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.031618532Z" level=info msg="Created container 09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075: kube-system/storage-provisioner/storage-provisioner" id=7754375f-bb3f-4f7d-b56e-ee634a7cfb0a name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.032580028Z" level=info msg="Starting container: 09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075" id=140a4d02-004d-4467-8ba4-2595cd8205f1 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:33 no-preload-001983 crio[572]: time="2025-10-26T08:31:33.035136839Z" level=info msg="Started container" PID=1772 containerID=09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075 description=kube-system/storage-provisioner/storage-provisioner id=140a4d02-004d-4467-8ba4-2595cd8205f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=666e5a550ed7ca2a7162a7f30d58b082e5905051bc6736e05b1b54dc95a93298
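
The CNI monitoring event at the top of this block (CREATE "10-kindnet.conflist" ← "10-kindnet.conflist.temp") is the signature of an atomic config write: the writer fills a temp file, then renames it into place so the inotify watcher never observes a half-written conflist. A self-contained Go sketch of the pattern (paths and payload illustrative):

    package main

    import (
    	"os"
    	"path/filepath"
    )

    // writeAtomic writes data to path+".temp" and renames it into place.
    // rename(2) is atomic within a filesystem, so a watcher like CRI-O's
    // CNI monitor sees one CREATE of a fully written file.
    func writeAtomic(path string, data []byte) error {
    	tmp := path + ".temp"
    	if err := os.WriteFile(tmp, data, 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	conflist := []byte(`{"cniVersion":"0.3.1","name":"kindnet","plugins":[{"type":"ptp"}]}`)
    	if err := writeAtomic(filepath.Join(os.TempDir(), "10-kindnet.conflist"), conflist); err != nil {
    		panic(err)
    	}
    }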
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	09a9bd3e1f32e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   666e5a550ed7c       storage-provisioner                          kube-system
	a81919b8384b0       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   50854f2308022       dashboard-metrics-scraper-6ffb444bf9-xps45   kubernetes-dashboard
	afc141c8a034d       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   df0d255030564       kubernetes-dashboard-855c9754f9-48znz        kubernetes-dashboard
	d67644d288b5b       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   d1013077e84e5       coredns-66bc5c9577-p5nmq                     kube-system
	04a954300591c       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   b5dc7b5179471       busybox                                      default
	ad1aac48cb866       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   666e5a550ed7c       storage-provisioner                          kube-system
	529f576c97f1a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   c491ab1ec78f1       kindnet-8lrm6                                kube-system
	2da6a31b2b449       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   78c01bfae4a4d       kube-proxy-xpz59                             kube-system
	4c584459a8b9c       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   7dcd572e130b8       kube-controller-manager-no-preload-001983    kube-system
	895b68d06c4c8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   8998e16fbdcc5       kube-scheduler-no-preload-001983             kube-system
	9af37f96ad50d       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   5c480a6033ae2       kube-apiserver-no-preload-001983             kube-system
	09efe5a8a887a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   5f0258e221fa1       etcd-no-preload-001983                       kube-system
	
	
	==> coredns [d67644d288b5ba279cc7cb9b7107221732bfb60312f8f31fd45cd95cfef849ae] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:60118 - 40693 "HINFO IN 4165625567285328766.644999149996727103. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.018404255s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
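
All three reflector failures above are dial timeouts to 10.96.0.1:443, the ClusterIP of the kubernetes Service, which suggests coredns came back before kube-proxy had reprogrammed the service rules after the node restart (the kube-proxy block below only reports its caches synced at 08:31:02.49). A quick Go probe of the same path, handy when reproducing this by hand from inside a pod (address taken from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The in-cluster API VIP that coredns failed to reach.
    	conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
    	if err != nil {
    		fmt.Println("API VIP unreachable:", err) // expected until kube-proxy syncs
    		return
    	}
    	conn.Close()
    	fmt.Println("API VIP reachable")
    }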
	
	
	==> describe nodes <==
	Name:               no-preload-001983
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-001983
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=no-preload-001983
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-001983
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:31:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:29:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:31:32 +0000   Sun, 26 Oct 2025 08:31:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-001983
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                0d1d1615-c76d-4158-8917-674a566b71fc
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-p5nmq                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     105s
	  kube-system                 etcd-no-preload-001983                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-8lrm6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-no-preload-001983              250m (3%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-controller-manager-no-preload-001983     200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-xpz59                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-no-preload-001983              100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-xps45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-48znz         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
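	
	The totals are consistent with the per-pod table above: CPU requests are 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m / 8000m ≈ 10.6%, shown as 10%. Memory requests are 70Mi + 100Mi + 50Mi = 220Mi, and the only limits set are kindnet's 100m of CPU and the 170Mi (coredns) + 50Mi (kindnet) = 220Mi of memory.
	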
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 53s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           106s                 node-controller  Node no-preload-001983 event: Registered Node no-preload-001983 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-001983 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-001983 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-001983 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-001983 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node no-preload-001983 event: Registered Node no-preload-001983 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
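
In these kernel messages the first address is the destination and the second the offending source, so eth0 is receiving packets for pod IP 10.244.0.20 that claim to come from 127.0.0.1; a loopback source on a non-loopback interface is always classified as martian. The kernel only prints these when martian logging is enabled, controlled per interface by a sysctl:

    sysctl -w net.ipv4.conf.all.log_martians=1   # set to 0 to silence the log spam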
	
	
	==> etcd [09efe5a8a887a3172db87ced2e163334f36f6661f8d12e7e6ad96c8dd5c8fdeb] <==
	{"level":"warn","ts":"2025-10-26T08:31:00.816687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.833749Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56034","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.840062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.846231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56082","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.852586Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.858872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.865927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.872732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.884384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.890502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.898081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.906174Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.913053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56254","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.919420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56272","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.926118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.933732Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.939939Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.946600Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.953551Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.960531Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56370","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.989193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:00.996104Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:01.042190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:31:32.205416Z","caller":"traceutil/trace.go:172","msg":"trace[371660045] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"101.542771ms","start":"2025-10-26T08:31:32.103848Z","end":"2025-10-26T08:31:32.205390Z","steps":["trace[371660045] 'process raft request'  (duration: 101.316334ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:31:32.960156Z","caller":"traceutil/trace.go:172","msg":"trace[1929247081] transaction","detail":"{read_only:false; response_revision:656; number_of_response:1; }","duration":"131.598098ms","start":"2025-10-26T08:31:32.828536Z","end":"2025-10-26T08:31:32.960134Z","steps":["trace[1929247081] 'process raft request'  (duration: 125.243073ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:31:55 up  1:14,  0 user,  load average: 4.93, 3.53, 2.18
	Linux no-preload-001983 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [529f576c97f1ad3986c3ed57f6c2cfc78ae1d8f80bb553d59fbb6bfbe2e05dee] <==
	I1026 08:31:02.304115       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:31:02.304418       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1026 08:31:02.304555       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:31:02.304570       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:31:02.304590       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:31:02Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:31:02.698522       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:31:02.698581       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:31:02.698597       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:31:02.699042       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:31:03.000076       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:31:03.000114       1 metrics.go:72] Registering metrics
	I1026 08:31:03.000190       1 controller.go:711] "Syncing nftables rules"
	I1026 08:31:12.698700       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:12.698786       1 main.go:301] handling current node
	I1026 08:31:22.698461       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:22.698510       1 main.go:301] handling current node
	I1026 08:31:32.698621       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:32.698663       1 main.go:301] handling current node
	I1026 08:31:42.698829       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:42.698873       1 main.go:301] handling current node
	I1026 08:31:52.698470       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1026 08:31:52.698513       1 main.go:301] handling current node
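
The node-handling entries above fire on a fixed 10-second cadence (08:31:12, :22, :32, :42, :52), the usual resync-ticker reconcile loop. A minimal Go sketch of that cadence (the reconcile body is a stand-in, not kindnet's code):

    package main

    import (
    	"fmt"
    	"time"
    )

    // reconcile stands in for kindnet's per-node sync of routes and
    // nftables rules.
    func reconcile() {
    	fmt.Println("handling current node at", time.Now().Format("15:04:05"))
    }

    func main() {
    	ticker := time.NewTicker(10 * time.Second)
    	defer ticker.Stop()
    	for i := 0; i < 3; i++ { // bounded here so the sketch terminates
    		<-ticker.C
    		reconcile()
    	}
    }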
	
	
	==> kube-apiserver [9af37f96ad50d506190b4c623adb174e57b1595cf1697f17669021e88201d00e] <==
	I1026 08:31:01.501614       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:31:01.501167       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:31:01.501941       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:31:01.502674       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:31:01.506437       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:31:01.521338       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:31:01.521330       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:31:01.521421       1 policy_source.go:240] refreshing policies
	I1026 08:31:01.521421       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:31:01.521439       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:31:01.521448       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:31:01.521454       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:31:01.527212       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:31:01.529110       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:31:01.748282       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:31:01.777538       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:31:01.798412       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:31:01.806349       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:31:01.812762       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:31:01.847912       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.107.159.87"}
	I1026 08:31:01.858121       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.208.4"}
	I1026 08:31:02.404219       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:31:05.017049       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:31:05.265784       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:31:05.415812       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [4c584459a8b9ceee81272b11057c6992b6445414d13db7978d48dece06c956e1] <==
	I1026 08:31:04.838656       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:31:04.842941       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:31:04.845298       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:31:04.847505       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:31:04.848946       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:31:04.851335       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:31:04.861856       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 08:31:04.861909       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:31:04.862387       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:31:04.863356       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:31:04.863430       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:31:04.863553       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:31:04.863566       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-001983"
	I1026 08:31:04.863614       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:31:04.863636       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:31:04.863643       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:31:04.863691       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:31:04.865032       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:31:04.867164       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:31:04.868337       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:04.869541       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:31:04.869572       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:31:04.871820       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:31:04.874177       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:31:04.889490       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2da6a31b2b449b76a6fad36286c9ef2883f28efd94b9fc8093f8f1dc49c00f7e] <==
	I1026 08:31:02.200291       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:31:02.264373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:31:02.365540       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:31:02.365578       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1026 08:31:02.365659       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:31:02.383960       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:31:02.384014       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:31:02.389316       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:31:02.389710       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:31:02.389749       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:02.391374       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:31:02.391394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:31:02.391426       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:31:02.391422       1 config.go:200] "Starting service config controller"
	I1026 08:31:02.391444       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:31:02.391433       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:31:02.391488       1 config.go:309] "Starting node config controller"
	I1026 08:31:02.391497       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:31:02.491640       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:31:02.491662       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:31:02.491671       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:31:02.491690       1 shared_informer.go:356] "Caches are synced" controller="node config"
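
The only error in this block is kube-proxy's own configuration lint, which recommends restricting NodePorts via `--nodeport-addresses primary`. In componentconfig form that would be a one-line fragment like the following (a sketch only; minikube does not set this by default):

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    nodePortAddresses: ["primary"]   # accept NodePort traffic only on the node's primary IPs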
	
	
	==> kube-scheduler [895b68d06c4c842bc1c2cab1766e76fb423dcd76ef2a2caa87c3d26070e83456] <==
	I1026 08:31:00.206993       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:31:01.471698       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:31:01.471729       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:01.477317       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 08:31:01.477323       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:01.477353       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:01.477358       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 08:31:01.477346       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.477390       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.477806       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:31:01.477875       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:31:01.577684       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 08:31:01.577711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:01.577661       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:05 no-preload-001983 kubelet[721]: I1026 08:31:05.550955     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t47lz\" (UniqueName: \"kubernetes.io/projected/390e2ecb-697d-4556-824a-09e99b456a1a-kube-api-access-t47lz\") pod \"kubernetes-dashboard-855c9754f9-48znz\" (UID: \"390e2ecb-697d-4556-824a-09e99b456a1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz"
	Oct 26 08:31:05 no-preload-001983 kubelet[721]: I1026 08:31:05.551042     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/390e2ecb-697d-4556-824a-09e99b456a1a-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-48znz\" (UID: \"390e2ecb-697d-4556-824a-09e99b456a1a\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz"
	Oct 26 08:31:06 no-preload-001983 kubelet[721]: I1026 08:31:06.273068     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:31:08 no-preload-001983 kubelet[721]: I1026 08:31:08.860635     721 scope.go:117] "RemoveContainer" containerID="993b65361423122c727dca7e516c6dcd43e606047619f2894f832bababbe9234"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: I1026 08:31:09.867592     721 scope.go:117] "RemoveContainer" containerID="993b65361423122c727dca7e516c6dcd43e606047619f2894f832bababbe9234"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: I1026 08:31:09.867632     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:09 no-preload-001983 kubelet[721]: E1026 08:31:09.867830     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:10 no-preload-001983 kubelet[721]: I1026 08:31:10.869891     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:10 no-preload-001983 kubelet[721]: E1026 08:31:10.870083     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:11 no-preload-001983 kubelet[721]: I1026 08:31:11.890057     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-48znz" podStartSLOduration=1.068496609 podStartE2EDuration="6.890032218s" podCreationTimestamp="2025-10-26 08:31:05 +0000 UTC" firstStartedPulling="2025-10-26 08:31:05.834452095 +0000 UTC m=+7.110743816" lastFinishedPulling="2025-10-26 08:31:11.655987701 +0000 UTC m=+12.932279425" observedRunningTime="2025-10-26 08:31:11.889655603 +0000 UTC m=+13.165947342" watchObservedRunningTime="2025-10-26 08:31:11.890032218 +0000 UTC m=+13.166323954"
	Oct 26 08:31:13 no-preload-001983 kubelet[721]: I1026 08:31:13.373752     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:13 no-preload-001983 kubelet[721]: E1026 08:31:13.373973     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.813742     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.918176     721 scope.go:117] "RemoveContainer" containerID="4248e1542515e317ee60cf0f67c33e1d50a6fee3d3d13b7c5413cbebe9db6709"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: I1026 08:31:26.918421     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:26 no-preload-001983 kubelet[721]: E1026 08:31:26.918639     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:32 no-preload-001983 kubelet[721]: I1026 08:31:32.938120     721 scope.go:117] "RemoveContainer" containerID="ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25"
	Oct 26 08:31:33 no-preload-001983 kubelet[721]: I1026 08:31:33.374298     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:33 no-preload-001983 kubelet[721]: E1026 08:31:33.374525     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:46 no-preload-001983 kubelet[721]: I1026 08:31:46.813110     721 scope.go:117] "RemoveContainer" containerID="a81919b8384b0edac75a9f5091179670f68d77af4b57ea857d23c0184c42ba64"
	Oct 26 08:31:46 no-preload-001983 kubelet[721]: E1026 08:31:46.813387     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-xps45_kubernetes-dashboard(06a7ef0f-ce83-4570-a3d8-125c827c1c3c)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-xps45" podUID="06a7ef0f-ce83-4570-a3d8-125c827c1c3c"
	Oct 26 08:31:50 no-preload-001983 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:31:50 no-preload-001983 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:31:50 no-preload-001983 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:31:50 no-preload-001983 systemd[1]: kubelet.service: Consumed 1.698s CPU time.
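
The CrashLoopBackOff intervals above show the kubelet's restart back-off doubling from 10s (08:31:09) to 20s (08:31:26). Each failed restart doubles the delay, roughly backoff = 10s × 2^(n−1) capped at 5 minutes, and the counter resets once the container has run cleanly for a sustained period (10 minutes by default).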
	
	
	==> kubernetes-dashboard [afc141c8a034d2f7011113758f37bcb772b61ac78d0e9bfaaacce15188956d75] <==
	2025/10/26 08:31:11 Using namespace: kubernetes-dashboard
	2025/10/26 08:31:11 Using in-cluster config to connect to apiserver
	2025/10/26 08:31:11 Using secret token for csrf signing
	2025/10/26 08:31:11 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:31:11 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:31:11 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:31:11 Generating JWE encryption key
	2025/10/26 08:31:11 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:31:11 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:31:12 Initializing JWE encryption key from synchronized object
	2025/10/26 08:31:12 Creating in-cluster Sidecar client
	2025/10/26 08:31:12 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:12 Serving insecurely on HTTP port: 9090
	2025/10/26 08:31:42 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:11 Starting overwatch
	
	
	==> storage-provisioner [09a9bd3e1f32e9950c69d47307d8f5caef265ec9351e995a734464184843e075] <==
	I1026 08:31:33.051452       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:33.060157       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:33.060212       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:31:33.064059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:36.519227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:40.779850       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:44.379612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:47.436224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.459001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.464848       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:31:50.465013       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:31:50.465167       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6b8f114b-3680-4914-b270-3b66442ba435", APIVersion:"v1", ResourceVersion:"673", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34 became leader
	I1026 08:31:50.465198       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34!
	W1026 08:31:50.467646       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:50.471734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:31:50.566010       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-001983_67b86529-1dcf-43a1-bb5a-2826f60bfb34!
	W1026 08:31:52.475536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:52.480217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:54.487357       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:54.495155       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
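	
	The repeated warnings in this block appear because the provisioner uses a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath) as its leader-election lock, a pattern the API server flags as deprecated since v1.33. A minimal client-go sketch of the Lease-based lock that replaces it; the object names mirror the log, but the migration itself is an assumption for illustration, not something this run performs:
	
		package main
		
		import (
			"context"
			"os"
			"time"
		
			metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
			"k8s.io/client-go/kubernetes"
			"k8s.io/client-go/rest"
			"k8s.io/client-go/tools/leaderelection"
			"k8s.io/client-go/tools/leaderelection/resourcelock"
		)
		
		func main() {
			cfg, err := rest.InClusterConfig()
			if err != nil {
				panic(err)
			}
			client := kubernetes.NewForConfigOrDie(cfg)
			id, _ := os.Hostname()
		
			// Lease lock in place of the deprecated Endpoints lock.
			lock := &resourcelock.LeaseLock{
				LeaseMeta:  metav1.ObjectMeta{Namespace: "kube-system", Name: "k8s.io-minikube-hostpath"},
				Client:     client.CoordinationV1(),
				LockConfig: resourcelock.ResourceLockConfig{Identity: id},
			}
			leaderelection.RunOrDie(context.TODO(), leaderelection.LeaderElectionConfig{
				Lock:          lock,
				LeaseDuration: 15 * time.Second,
				RenewDeadline: 10 * time.Second,
				RetryPeriod:   2 * time.Second,
				Callbacks: leaderelection.LeaderCallbacks{
					OnStartedLeading: func(ctx context.Context) { /* start the provisioner controller */ },
					OnStoppedLeading: func() { os.Exit(1) },
				},
			})
		}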
	
	
	==> storage-provisioner [ad1aac48cb866a8a429a901092cbedec57ca9cb5db6edd6939b3c2894e0dda25] <==
	I1026 08:31:02.178792       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:32.181687       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
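
The second storage-provisioner instance above died on its very first API call: a GET of /version against the Service VIP 10.96.0.1:443 timed out (the 32s in the URL matches client-go's default discovery timeout), which points at pod-to-apiserver networking rather than at the provisioner itself. A minimal client-go reproduction of that probe, to be run in a pod on the same cluster:

	package main
	
	import (
		"fmt"
	
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)
	
	func main() {
		// In-cluster config resolves KUBERNETES_SERVICE_HOST, i.e. 10.96.0.1 here.
		cfg, err := rest.InClusterConfig()
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		info, err := client.Discovery().ServerVersion()
		if err != nil {
			// The provisioner's fatal path: "error getting server version: ..."
			panic(err)
		}
		fmt.Println("apiserver version:", info.GitVersion)
	}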
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-001983 -n no-preload-001983
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-001983 -n no-preload-001983: exit status 2 (351.67012ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-001983 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.89s)

TestStartStop/group/embed-certs/serial/Pause (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-752315 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-752315 --alsologtostderr -v=1: exit status 80 (1.849558154s)

-- stdout --
	* Pausing node embed-certs-752315 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 08:32:07.240507  275592 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:07.240772  275592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:07.240781  275592 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:07.240786  275592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:07.241036  275592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:07.241336  275592 out.go:368] Setting JSON to false
	I1026 08:32:07.241364  275592 mustload.go:65] Loading cluster: embed-certs-752315
	I1026 08:32:07.241763  275592 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:07.242206  275592 cli_runner.go:164] Run: docker container inspect embed-certs-752315 --format={{.State.Status}}
	I1026 08:32:07.261847  275592 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:32:07.262302  275592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:07.327697  275592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:80 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:32:07.316035544 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:07.328528  275592 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-752315 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:32:07.331421  275592 out.go:179] * Pausing node embed-certs-752315 ... 
	I1026 08:32:07.332807  275592 host.go:66] Checking if "embed-certs-752315" exists ...
	I1026 08:32:07.333039  275592 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:07.333074  275592 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-752315
	I1026 08:32:07.355407  275592 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33078 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/embed-certs-752315/id_rsa Username:docker}
	I1026 08:32:07.459361  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:07.479010  275592 pause.go:52] kubelet running: true
	I1026 08:32:07.479092  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:07.701151  275592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:07.701278  275592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:07.778761  275592 cri.go:89] found id: "cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196"
	I1026 08:32:07.778779  275592 cri.go:89] found id: "16f5a20811e08c9a87436b830181d07a08d7c9c19042686b547a09d115b7077e"
	I1026 08:32:07.778784  275592 cri.go:89] found id: "8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357"
	I1026 08:32:07.778788  275592 cri.go:89] found id: "b59c79a12396440c5b834d5c3f3895abb0777e31e4f19207a302ce038fb04e94"
	I1026 08:32:07.778792  275592 cri.go:89] found id: "9ad903d67dde66294e4479668d0c5b6cf2ee2a72713eb621ec1ffceff453c1d3"
	I1026 08:32:07.778796  275592 cri.go:89] found id: "b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2"
	I1026 08:32:07.778800  275592 cri.go:89] found id: "0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2"
	I1026 08:32:07.778804  275592 cri.go:89] found id: "412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a"
	I1026 08:32:07.778808  275592 cri.go:89] found id: "53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66"
	I1026 08:32:07.778833  275592 cri.go:89] found id: "03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	I1026 08:32:07.778843  275592 cri.go:89] found id: "4b898bc10d22ebec112eb26c1c60033644c1c9521519a40efded7e7d0fb11a33"
	I1026 08:32:07.778847  275592 cri.go:89] found id: ""
	I1026 08:32:07.778889  275592 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:07.791148  275592 retry.go:31] will retry after 149.918362ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:07Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:07.941590  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:07.956544  275592 pause.go:52] kubelet running: false
	I1026 08:32:07.956606  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:08.118497  275592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:08.118571  275592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:08.191287  275592 cri.go:89] found id: "cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196"
	I1026 08:32:08.191314  275592 cri.go:89] found id: "16f5a20811e08c9a87436b830181d07a08d7c9c19042686b547a09d115b7077e"
	I1026 08:32:08.191319  275592 cri.go:89] found id: "8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357"
	I1026 08:32:08.191324  275592 cri.go:89] found id: "b59c79a12396440c5b834d5c3f3895abb0777e31e4f19207a302ce038fb04e94"
	I1026 08:32:08.191329  275592 cri.go:89] found id: "9ad903d67dde66294e4479668d0c5b6cf2ee2a72713eb621ec1ffceff453c1d3"
	I1026 08:32:08.191334  275592 cri.go:89] found id: "b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2"
	I1026 08:32:08.191338  275592 cri.go:89] found id: "0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2"
	I1026 08:32:08.191342  275592 cri.go:89] found id: "412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a"
	I1026 08:32:08.191347  275592 cri.go:89] found id: "53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66"
	I1026 08:32:08.191356  275592 cri.go:89] found id: "03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	I1026 08:32:08.191360  275592 cri.go:89] found id: "4b898bc10d22ebec112eb26c1c60033644c1c9521519a40efded7e7d0fb11a33"
	I1026 08:32:08.191364  275592 cri.go:89] found id: ""
	I1026 08:32:08.191406  275592 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:08.203748  275592 retry.go:31] will retry after 538.794093ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:08Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:08.743473  275592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:08.757361  275592 pause.go:52] kubelet running: false
	I1026 08:32:08.757413  275592 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:08.923088  275592 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:08.923165  275592 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:09.001205  275592 cri.go:89] found id: "cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196"
	I1026 08:32:09.001234  275592 cri.go:89] found id: "16f5a20811e08c9a87436b830181d07a08d7c9c19042686b547a09d115b7077e"
	I1026 08:32:09.001240  275592 cri.go:89] found id: "8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357"
	I1026 08:32:09.001245  275592 cri.go:89] found id: "b59c79a12396440c5b834d5c3f3895abb0777e31e4f19207a302ce038fb04e94"
	I1026 08:32:09.001259  275592 cri.go:89] found id: "9ad903d67dde66294e4479668d0c5b6cf2ee2a72713eb621ec1ffceff453c1d3"
	I1026 08:32:09.001264  275592 cri.go:89] found id: "b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2"
	I1026 08:32:09.001268  275592 cri.go:89] found id: "0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2"
	I1026 08:32:09.001272  275592 cri.go:89] found id: "412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a"
	I1026 08:32:09.001276  275592 cri.go:89] found id: "53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66"
	I1026 08:32:09.001288  275592 cri.go:89] found id: "03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	I1026 08:32:09.001292  275592 cri.go:89] found id: "4b898bc10d22ebec112eb26c1c60033644c1c9521519a40efded7e7d0fb11a33"
	I1026 08:32:09.001297  275592 cri.go:89] found id: ""
	I1026 08:32:09.001352  275592 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:09.015568  275592 out.go:203] 
	W1026 08:32:09.016692  275592 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:32:09.016711  275592 out.go:285] * 
	* 
	W1026 08:32:09.021091  275592 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:32:09.025349  275592 out.go:203] 

** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-752315 --alsologtostderr -v=1 failed: exit status 80
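As with the no-preload failure above, the pause exits at the same step every time: after disabling the kubelet, minikube enumerates containers with `sudo runc list -f json`, and that fails because the runc state directory /run/runc does not exist on the node. A hedged reproduction of the failing step plus a state-directory check; the /run/crun path is a hypothetical probe (crun keeps its state there), not something the test runs:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
	)
	
	func main() {
		// The exact command from the trace above.
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		fmt.Printf("runc list err=%v\noutput: %s\n", err, out)
	
		// Hypothetical follow-up: which OCI runtime state dir actually exists?
		for _, dir := range []string{"/run/runc", "/run/crun"} {
			if _, err := os.Stat(dir); err == nil {
				fmt.Println("state dir present:", dir)
			} else {
				fmt.Println("state dir missing:", dir)
			}
		}
	}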
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-752315
helpers_test.go:243: (dbg) docker inspect embed-certs-752315:

-- stdout --
	[
	    {
	        "Id": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	        "Created": "2025-10-26T08:30:03.656841768Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258742,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:31:06.064033845Z",
	            "FinishedAt": "2025-10-26T08:31:05.100205627Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hostname",
	        "HostsPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hosts",
	        "LogPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215-json.log",
	        "Name": "/embed-certs-752315",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-752315:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-752315",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	                "LowerDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-752315",
	                "Source": "/var/lib/docker/volumes/embed-certs-752315/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-752315",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-752315",
	                "name.minikube.sigs.k8s.io": "embed-certs-752315",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "220ac7a6a1664bed31842bddcd77b605efc1e7e095f15219c0e3836ad97ff4d5",
	            "SandboxKey": "/var/run/docker/netns/220ac7a6a166",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-752315": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:7b:9e:9e:f7:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5aa8ca4605176daf87c9c9f24c1c35f5c6618444861770e8529506402674500",
	                    "EndpointID": "c058fcf50929a5e59f5494e75870dd9dd045cd9154e6c62f2c61f86c0a87e206",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-752315",
	                        "8eca8953ad72"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
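The cli_runner line at 08:32:07 shows how minikube resolves the node's SSH endpoint: it reads the HostPort bound to 22/tcp out of the inspect output above (33078 in this run). An equivalent one-off lookup with the same Go template, minus the wrapping quotes the log adds:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format,
			"embed-certs-752315").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 33078 here
	}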
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315: exit status 2 (340.051134ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25: (1.202632994s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
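	
	Every entry below follows the klog header format documented just above. A small parser for it (the regular expression is mine; the format string is from the header):
	
		package main
		
		import (
			"fmt"
			"regexp"
		)
		
		// Matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
		var klogLine = regexp.MustCompile(
			`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)
		
		func main() {
			m := klogLine.FindStringSubmatch(
				"I1026 08:32:00.490412  273227 out.go:360] Setting OutFile to fd 1 ...")
			if m != nil {
				fmt.Printf("severity=%s date=%s time=%s pid=%s file=%s:%s msg=%q\n",
					m[1], m[2], m[3], m[4], m[5], m[6], m[7])
			}
		}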
	I1026 08:32:00.490412  273227 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:00.490682  273227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:00.490694  273227 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:00.490699  273227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:00.490990  273227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:00.491492  273227 out.go:368] Setting JSON to false
	I1026 08:32:00.492613  273227 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4471,"bootTime":1761463049,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:00.492697  273227 start.go:141] virtualization: kvm guest
	I1026 08:32:00.494601  273227 out.go:179] * [auto-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:00.496107  273227 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:00.496095  273227 notify.go:220] Checking for updates...
	I1026 08:32:00.501725  273227 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:00.502963  273227 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:00.504471  273227 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:00.505791  273227 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:00.506891  273227 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:00.508773  273227 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.508927  273227 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.509084  273227 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.509207  273227 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:00.535430  273227 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:00.535553  273227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:00.594940  273227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:00.584446159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:00.595077  273227 docker.go:318] overlay module found
	I1026 08:32:00.597081  273227 out.go:179] * Using the docker driver based on user configuration
	I1026 08:32:00.598487  273227 start.go:305] selected driver: docker
	I1026 08:32:00.598508  273227 start.go:925] validating driver "docker" against <nil>
	I1026 08:32:00.598523  273227 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:00.599365  273227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:00.658825  273227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:00.648902819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:00.658982  273227 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:32:00.659211  273227 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:00.661215  273227 out.go:179] * Using Docker driver with root privileges
	I1026 08:32:00.662512  273227 cni.go:84] Creating CNI manager for ""
	I1026 08:32:00.662576  273227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:00.662587  273227 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:32:00.662652  273227 start.go:349] cluster config:
	{Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:00.664056  273227 out.go:179] * Starting "auto-110992" primary control-plane node in "auto-110992" cluster
	I1026 08:32:00.665333  273227 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:00.666648  273227 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:32:00.667844  273227 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:00.667887  273227 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:00.667884  273227 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:00.667896  273227 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:00.668006  273227 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:00.668020  273227 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:00.668137  273227 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json ...
	I1026 08:32:00.668160  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json: {Name:mk9a603c818bfb8aee3ce9258672b2a135ca6e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:00.689081  273227 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:00.689100  273227 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:00.689119  273227 cache.go:232] Successfully downloaded all kic artifacts
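	(The daemon-side image check reported above can be reproduced by hand with the docker CLI; a minimal sketch, with the image reference copied verbatim from this run:
	  docker image inspect \
	    gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 \
	    --format '{{.Id}}' >/dev/null 2>&1 && echo "exists in daemon, skipping pull"
	)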
	I1026 08:32:00.689150  273227 start.go:360] acquireMachinesLock for auto-110992: {Name:mk20dec79305eb324248958d5953c5e7e46e96f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:00.689271  273227 start.go:364] duration metric: took 81.294µs to acquireMachinesLock for "auto-110992"
	I1026 08:32:00.689303  273227 start.go:93] Provisioning new machine with config: &{Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:00.689377  273227 start.go:125] createHost starting for "" (driver="docker")
	W1026 08:31:58.286439  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	W1026 08:32:00.786204  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	I1026 08:31:59.004334  270203 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-366970:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.790969179s)
	I1026 08:31:59.004368  270203 kic.go:203] duration metric: took 4.791173084s to extract preloaded images to volume ...
	W1026 08:31:59.004456  270203 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:31:59.004503  270203 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:31:59.004547  270203 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:31:59.063210  270203 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-366970 --name newest-cni-366970 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-366970 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-366970 --network newest-cni-366970 --ip 192.168.85.2 --volume newest-cni-366970:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:31:59.359560  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Running}}
	I1026 08:31:59.379079  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:31:59.399125  270203 cli_runner.go:164] Run: docker exec newest-cni-366970 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:31:59.444850  270203 oci.go:144] the created container "newest-cni-366970" has a running status.
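	(The two container-state probes above can be replayed directly, using the same inspect format strings from this run:
	  docker container inspect newest-cni-366970 --format '{{.State.Running}}'   # expect: true
	  docker container inspect newest-cni-366970 --format '{{.State.Status}}'    # expect: running
	)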
	I1026 08:31:59.444883  270203 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa...
	I1026 08:31:59.791240  270203 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:31:59.999317  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:00.019575  270203 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:32:00.019599  270203 kic_runner.go:114] Args: [docker exec --privileged newest-cni-366970 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:32:00.071768  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:00.091906  270203 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:00.092008  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.111533  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.111811  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.111830  270203 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:00.258348  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366970
	
	I1026 08:32:00.258378  270203 ubuntu.go:182] provisioning hostname "newest-cni-366970"
	I1026 08:32:00.258437  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.278883  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.279120  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.279140  270203 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-366970 && echo "newest-cni-366970" | sudo tee /etc/hostname
	I1026 08:32:00.431811  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366970
	
	I1026 08:32:00.431923  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.451034  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.451236  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.451293  270203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-366970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-366970/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-366970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:00.597463  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:32:00.597487  270203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:00.597517  270203 ubuntu.go:190] setting up certificates
	I1026 08:32:00.597530  270203 provision.go:84] configureAuth start
	I1026 08:32:00.597596  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:00.618302  270203 provision.go:143] copyHostCerts
	I1026 08:32:00.618371  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:00.618381  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:00.618473  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:00.618615  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:00.618625  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:00.618668  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:00.618773  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:00.618794  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:00.618839  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:00.618929  270203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.newest-cni-366970 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-366970]
	I1026 08:32:00.759440  270203 provision.go:177] copyRemoteCerts
	I1026 08:32:00.759513  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:00.759559  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.781259  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
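	(The ssh client parameters in the line above map onto a plain ssh invocation; a hypothetical manual session using the key path, forwarded port, and user reported for this run:
	  ssh -i /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa \
	      -p 33091 docker@127.0.0.1
	)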
	I1026 08:32:00.885539  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:00.907147  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 08:32:00.925481  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:32:00.945519  270203 provision.go:87] duration metric: took 347.972983ms to configureAuth
	I1026 08:32:00.945554  270203 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:00.945746  270203 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.945854  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.967537  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.967732  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.967749  270203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:01.251501  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:32:01.251539  270203 machine.go:96] duration metric: took 1.159607829s to provisionDockerMachine
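	(To confirm the runtime option written above actually landed, one could check the drop-in file and service state inside the node; a sketch, with the path and expected content taken from the SSH output above:
	  cat /etc/sysconfig/crio.minikube   # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	  sudo systemctl is-active crio
	)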
	I1026 08:32:01.251554  270203 client.go:171] duration metric: took 7.731687004s to LocalClient.Create
	I1026 08:32:01.251579  270203 start.go:167] duration metric: took 7.731761794s to libmachine.API.Create "newest-cni-366970"
	I1026 08:32:01.251593  270203 start.go:293] postStartSetup for "newest-cni-366970" (driver="docker")
	I1026 08:32:01.251606  270203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:32:01.251671  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:32:01.251719  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.273398  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.383963  270203 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:32:01.388177  270203 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:32:01.388213  270203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:32:01.388225  270203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:32:01.388295  270203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:32:01.388385  270203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:32:01.388477  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:32:01.398827  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:01.428207  270203 start.go:296] duration metric: took 176.591146ms for postStartSetup
	I1026 08:32:01.428588  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:01.448722  270203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/config.json ...
	I1026 08:32:01.448991  270203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:32:01.449040  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.472013  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.573716  270203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:32:01.578740  270203 start.go:128] duration metric: took 8.061588923s to createHost
	I1026 08:32:01.578772  270203 start.go:83] releasing machines lock for "newest-cni-366970", held for 8.061735624s
	I1026 08:32:01.578853  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:01.598742  270203 ssh_runner.go:195] Run: cat /version.json
	I1026 08:32:01.598795  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.598825  270203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:32:01.598891  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.621535  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.621866  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.778373  270203 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:01.785728  270203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:32:01.827382  270203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:32:01.832449  270203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:32:01.832520  270203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:32:01.864376  270203 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
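	(The find/mv pass above is easier to read with the shell quoting made explicit; the same disable step, rewritten with a safely quoted -exec body instead of raw {} substitution:
	  sudo find /etc/cni/net.d -maxdepth 1 -type f \
	    \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	    -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
	)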
	I1026 08:32:01.864404  270203 start.go:495] detecting cgroup driver to use...
	I1026 08:32:01.864439  270203 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:32:01.864490  270203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:32:01.881218  270203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:32:01.894614  270203 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:32:01.894689  270203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:32:01.913482  270203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:32:01.936797  270203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:32:02.028291  270203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:32:02.149718  270203 docker.go:234] disabling docker service ...
	I1026 08:32:02.149781  270203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:32:02.169961  270203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:32:02.183155  270203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:32:02.284044  270203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:32:02.376715  270203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:32:02.390520  270203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:32:02.405215  270203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:32:02.405307  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.417807  270203 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:32:02.417880  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.427528  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.436858  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.446656  270203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:32:02.455123  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.464292  270203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.478476  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.487632  270203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:32:02.495385  270203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:32:02.503233  270203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:02.587450  270203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:32:04.932480  270203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.34498891s)
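	(Taken together, the CRI-O reconfiguration above reduces to a handful of idempotent edits to one conf file followed by a restart; a consolidated sketch of the same sed expressions from this run:
	  CONF=/etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
	  sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
	  sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
	  sudo systemctl daemon-reload && sudo systemctl restart crio
	)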
	I1026 08:32:04.932524  270203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:32:04.932582  270203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:32:04.937612  270203 start.go:563] Will wait 60s for crictl version
	I1026 08:32:04.937673  270203 ssh_runner.go:195] Run: which crictl
	I1026 08:32:04.942050  270203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:32:04.970730  270203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:32:04.970807  270203 ssh_runner.go:195] Run: crio --version
	I1026 08:32:05.001197  270203 ssh_runner.go:195] Run: crio --version
	I1026 08:32:05.036425  270203 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:32:05.039927  270203 cli_runner.go:164] Run: docker network inspect newest-cni-366970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:05.059362  270203 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 08:32:05.063577  270203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:32:05.077595  270203 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 08:32:00.692208  273227 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:32:00.692484  273227 start.go:159] libmachine.API.Create for "auto-110992" (driver="docker")
	I1026 08:32:00.692547  273227 client.go:168] LocalClient.Create starting
	I1026 08:32:00.692647  273227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:32:00.692693  273227 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:00.692718  273227 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:00.692792  273227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:32:00.692822  273227 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:00.692838  273227 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:00.693231  273227 cli_runner.go:164] Run: docker network inspect auto-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:32:00.711498  273227 cli_runner.go:211] docker network inspect auto-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:32:00.711573  273227 network_create.go:284] running [docker network inspect auto-110992] to gather additional debugging logs...
	I1026 08:32:00.711593  273227 cli_runner.go:164] Run: docker network inspect auto-110992
	W1026 08:32:00.728518  273227 cli_runner.go:211] docker network inspect auto-110992 returned with exit code 1
	I1026 08:32:00.728558  273227 network_create.go:287] error running [docker network inspect auto-110992]: docker network inspect auto-110992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-110992 not found
	I1026 08:32:00.728575  273227 network_create.go:289] output of [docker network inspect auto-110992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-110992 not found
	
	** /stderr **
	I1026 08:32:00.728665  273227 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:00.746727  273227 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:32:00.747494  273227 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:32:00.748216  273227 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:32:00.749006  273227 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee58c0}
	I1026 08:32:00.749028  273227 network_create.go:124] attempt to create docker network auto-110992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 08:32:00.749074  273227 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-110992 auto-110992
	I1026 08:32:00.814237  273227 network_create.go:108] docker network auto-110992 192.168.76.0/24 created
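	(The subnet selection above walks candidate /24s until it finds one no existing bridge network claims; the same scan and the create command can be replayed by hand, with subnet, gateway, and labels copied verbatim from this run:
	  docker network ls -q | xargs -n1 docker network inspect \
	    --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
	  docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 \
	    -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 \
	    --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-110992 auto-110992
	)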
	I1026 08:32:00.814294  273227 kic.go:121] calculated static IP "192.168.76.2" for the "auto-110992" container
	I1026 08:32:00.814358  273227 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:32:00.833280  273227 cli_runner.go:164] Run: docker volume create auto-110992 --label name.minikube.sigs.k8s.io=auto-110992 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:32:00.852227  273227 oci.go:103] Successfully created a docker volume auto-110992
	I1026 08:32:00.852342  273227 cli_runner.go:164] Run: docker run --rm --name auto-110992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-110992 --entrypoint /usr/bin/test -v auto-110992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:32:01.269602  273227 oci.go:107] Successfully prepared a docker volume auto-110992
	I1026 08:32:01.269658  273227 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:01.269685  273227 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:32:01.269747  273227 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:32:04.845492  273227 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.575692475s)
	I1026 08:32:04.845540  273227 kic.go:203] duration metric: took 3.575850558s to extract preloaded images to volume ...
	W1026 08:32:04.845629  273227 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:32:04.845658  273227 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:32:04.845694  273227 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:32:04.907470  273227 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-110992 --name auto-110992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-110992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-110992 --network auto-110992 --ip 192.168.76.2 --volume auto-110992:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:32:05.202203  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Running}}
	I1026 08:32:05.221535  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.241808  273227 cli_runner.go:164] Run: docker exec auto-110992 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:32:05.290823  273227 oci.go:144] the created container "auto-110992" has a running status.
	I1026 08:32:05.290863  273227 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa...
	I1026 08:32:05.477072  273227 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	W1026 08:32:03.286431  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	I1026 08:32:05.786495  264509 node_ready.go:49] node "default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:05.786522  264509 node_ready.go:38] duration metric: took 11.503921605s for node "default-k8s-diff-port-866212" to be "Ready" ...
	I1026 08:32:05.786536  264509 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:05.786590  264509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:05.801570  264509 api_server.go:72] duration metric: took 11.881600858s to wait for apiserver process to appear ...
	I1026 08:32:05.801598  264509 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:05.801620  264509 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1026 08:32:05.806467  264509 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
	I1026 08:32:05.807576  264509 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:05.807605  264509 api_server.go:131] duration metric: took 5.998272ms to wait for apiserver health ...
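	(The healthz probe is a plain HTTPS GET against the endpoint reported above; an equivalent quick manual check — -k skips certificate verification here purely for brevity, an assumption of this sketch:
	  curl -sk https://192.168.94.2:8444/healthz   # prints "ok" once the apiserver is healthy
	)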
	I1026 08:32:05.807616  264509 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:05.811419  264509 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:05.811455  264509 system_pods.go:61] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:05.811464  264509 system_pods.go:61] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:05.811470  264509 system_pods.go:61] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:05.811474  264509 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:05.811478  264509 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:05.811481  264509 system_pods.go:61] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:05.811485  264509 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:05.811490  264509 system_pods.go:61] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:05.811496  264509 system_pods.go:74] duration metric: took 3.874056ms to wait for pod list to return data ...
	I1026 08:32:05.811506  264509 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:05.813910  264509 default_sa.go:45] found service account: "default"
	I1026 08:32:05.813930  264509 default_sa.go:55] duration metric: took 2.417606ms for default service account to be created ...
	I1026 08:32:05.813939  264509 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:32:05.819794  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:05.819834  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:05.819842  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:05.819852  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:05.819859  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:05.819864  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:05.819880  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:05.819887  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:05.819899  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:05.819930  264509 retry.go:31] will retry after 277.373967ms: missing components: kube-dns
	I1026 08:32:06.100912  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.100951  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:06.100959  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.100967  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.100972  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.100977  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.100983  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.100988  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.100995  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:06.101017  264509 retry.go:31] will retry after 261.67719ms: missing components: kube-dns
	I1026 08:32:06.367271  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.367301  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:06.367307  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.367312  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.367322  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.367328  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.367335  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.367340  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.367348  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:06.367370  264509 retry.go:31] will retry after 377.967656ms: missing components: kube-dns
	I1026 08:32:06.749498  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.749538  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Running
	I1026 08:32:06.749551  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.749559  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.749565  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.749571  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.749575  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.749580  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.749584  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Running
	I1026 08:32:06.749595  264509 system_pods.go:126] duration metric: took 935.648535ms to wait for k8s-apps to be running ...
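	(The retry loop above is just polling pod status until kube-dns reports Ready; assuming kubectl is pointed at this cluster's context, an equivalent wait using the k8s-app=kube-dns label the test also checks later:
	  kubectl -n kube-system get pods
	  kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
	)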
	I1026 08:32:06.749604  264509 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:32:06.749655  264509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:06.766064  264509 system_svc.go:56] duration metric: took 16.4492ms WaitForService to wait for kubelet
	I1026 08:32:06.766107  264509 kubeadm.go:586] duration metric: took 12.846142972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:06.766130  264509 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:06.769049  264509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:06.769077  264509 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:06.769093  264509 node_conditions.go:105] duration metric: took 2.957803ms to run NodePressure ...
	I1026 08:32:06.769108  264509 start.go:241] waiting for startup goroutines ...
	I1026 08:32:06.769119  264509 start.go:246] waiting for cluster config update ...
	I1026 08:32:06.769133  264509 start.go:255] writing updated cluster config ...
	I1026 08:32:06.769438  264509 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:06.773831  264509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:06.778652  264509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.783420  264509 pod_ready.go:94] pod "coredns-66bc5c9577-h4dk5" is "Ready"
	I1026 08:32:06.783443  264509 pod_ready.go:86] duration metric: took 4.769673ms for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.785740  264509 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.790417  264509 pod_ready.go:94] pod "etcd-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:06.790439  264509 pod_ready.go:86] duration metric: took 4.67904ms for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.792609  264509 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.796907  264509 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:06.796929  264509 pod_ready.go:86] duration metric: took 4.29219ms for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.798851  264509 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.179385  264509 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:07.179417  264509 pod_ready.go:86] duration metric: took 380.546105ms for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.379177  264509 pod_ready.go:83] waiting for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.778473  264509 pod_ready.go:94] pod "kube-proxy-m4gfc" is "Ready"
	I1026 08:32:07.778499  264509 pod_ready.go:86] duration metric: took 399.292251ms for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:05.078989  270203 kubeadm.go:883] updating cluster {Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:32:05.079154  270203 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:05.079234  270203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:05.119971  270203 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:05.119994  270203 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:32:05.120041  270203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:05.149466  270203 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:05.149489  270203 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:32:05.149499  270203 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 08:32:05.149590  270203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-366970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:32:05.149671  270203 ssh_runner.go:195] Run: crio config
	I1026 08:32:05.200918  270203 cni.go:84] Creating CNI manager for ""
	I1026 08:32:05.200941  270203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:05.200957  270203 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 08:32:05.200979  270203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-366970 NodeName:newest-cni-366970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:32:05.201132  270203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-366970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:32:05.201193  270203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:32:05.210646  270203 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:32:05.210714  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:32:05.219556  270203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 08:32:05.234046  270203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:32:05.251904  270203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
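	(Before kubeadm consumes the staged /var/tmp/minikube/kubeadm.yaml.new, it can be sanity-checked by hand; a sketch, assuming the staged binary supports the validate subcommand (present in recent kubeadm releases):
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	)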
	I1026 08:32:05.267439  270203 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:32:05.271514  270203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:32:05.281588  270203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:05.377197  270203 ssh_runner.go:195] Run: sudo systemctl start kubelet
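	(If kubelet failed to come up at this point, the usual first checks are the unit state and recent journal; a generic sketch, not output from this run:
	  sudo systemctl status kubelet --no-pager
	  sudo journalctl -u kubelet -n 20 --no-pager
	)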
	I1026 08:32:05.409038  270203 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970 for IP: 192.168.85.2
	I1026 08:32:05.409061  270203 certs.go:195] generating shared ca certs ...
	I1026 08:32:05.409080  270203 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.409234  270203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:32:05.409303  270203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:32:05.409315  270203 certs.go:257] generating profile certs ...
	I1026 08:32:05.409377  270203 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key
	I1026 08:32:05.409396  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt with IP's: []
	I1026 08:32:05.737016  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt ...
	I1026 08:32:05.737042  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt: {Name:mked65a6c31d8090d1294b99baec89ed05a55f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.737217  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key ...
	I1026 08:32:05.737233  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key: {Name:mk1775f3987869ce392487a7e3e3ef4d1bec339a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.737380  270203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237
	I1026 08:32:05.737421  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 08:32:05.968221  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 ...
	I1026 08:32:05.968265  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237: {Name:mk44b72c6f2d5ba13c5724b22d45f17fdad9f076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.968415  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237 ...
	I1026 08:32:05.968429  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237: {Name:mk7ada4f016025b914a1cd6eaf66cd7314d045a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.968534  270203 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt
	I1026 08:32:05.968612  270203 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key
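	To inspect the SANs baked into the freshly assembled apiserver cert (the IP list [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2] from the generation step above), a standard openssl check works:
	
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt \
	    | grep -A1 'Subject Alternative Name'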
	I1026 08:32:05.968671  270203 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key
	I1026 08:32:05.968687  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt with IP's: []
	I1026 08:32:06.482626  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt ...
	I1026 08:32:06.482657  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt: {Name:mk302511fc3bdeeb11e202f01e4b462222003b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:06.482827  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key ...
	I1026 08:32:06.482840  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key: {Name:mka6ec3ba5bd6fb50b5978e23b03914eaace95f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:06.483033  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:32:06.483075  270203 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:32:06.483082  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:32:06.483103  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:32:06.483127  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:32:06.483155  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:32:06.483194  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:06.483778  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:32:06.502984  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:32:06.520891  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:32:06.540026  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:32:06.558469  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 08:32:06.576775  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:32:06.594636  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:32:06.613439  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:32:06.631635  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:32:06.651886  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:32:06.670940  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:32:06.688894  270203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:32:06.701747  270203 ssh_runner.go:195] Run: openssl version
	I1026 08:32:06.709042  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:32:06.719492  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.724410  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.724478  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.773556  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:32:06.784694  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:32:06.795414  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.800089  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.800169  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.838113  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:32:06.848434  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:32:06.858026  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.861970  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.862031  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.904996  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
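	The test -L / ln -fs pairs above reproduce the OpenSSL c_rehash convention: each trusted CA must be reachable under /etc/ssl/certs/&lt;subject-hash&gt;.0. Done by hand for the minikube CA, the same two steps look like:
	
	  hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 here
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"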
	I1026 08:32:06.916483  270203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:32:06.921168  270203 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:32:06.921236  270203 kubeadm.go:400] StartCluster: {Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:06.921336  270203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:32:06.921405  270203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:32:06.962682  270203 cri.go:89] found id: ""
	I1026 08:32:06.962750  270203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:32:06.972025  270203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:32:06.983189  270203 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:32:06.983243  270203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:32:06.993821  270203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:32:06.993845  270203 kubeadm.go:157] found existing configuration files:
	
	I1026 08:32:06.993887  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:32:07.003633  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:32:07.003690  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:32:07.012590  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:32:07.021875  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:32:07.021926  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:32:07.031595  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:32:07.040147  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:32:07.040203  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:32:07.048317  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:32:07.055822  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:32:07.055874  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:32:07.064117  270203 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:32:07.129372  270203 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:32:07.205489  270203 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
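	Both [WARNING] lines are expected in this environment: SystemVerification is deliberately skipped on the docker driver (it appears in the --ignore-preflight-errors list above), and the Service-Kubelet warning can be cleared with the command kubeadm itself suggests:
	
	  sudo systemctl enable kubelet.service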
	I1026 08:32:07.979366  264509 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:08.378933  264509 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:08.378957  264509 pod_ready.go:86] duration metric: took 399.563567ms for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:08.378969  264509 pod_ready.go:40] duration metric: took 1.605105815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:08.429122  264509 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:08.430968  264509 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-866212" cluster and "default" namespace by default
	I1026 08:32:05.506495  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.540710  273227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:32:05.540733  273227 kic_runner.go:114] Args: [docker exec --privileged auto-110992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:32:05.592748  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.617329  273227 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:05.617435  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.640933  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.641316  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.641341  273227 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:05.792357  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-110992
	
	I1026 08:32:05.792387  273227 ubuntu.go:182] provisioning hostname "auto-110992"
	I1026 08:32:05.792447  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.815405  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.815691  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.815713  273227 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-110992 && echo "auto-110992" | sudo tee /etc/hostname
	I1026 08:32:05.974533  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-110992
	
	I1026 08:32:05.974609  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.996186  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.996486  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.996520  273227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-110992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-110992/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-110992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:06.154830  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:32:06.154865  273227 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:06.154896  273227 ubuntu.go:190] setting up certificates
	I1026 08:32:06.154907  273227 provision.go:84] configureAuth start
	I1026 08:32:06.154962  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:06.174548  273227 provision.go:143] copyHostCerts
	I1026 08:32:06.174604  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:06.174614  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:06.174729  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:06.174837  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:06.174851  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:06.174893  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:06.174970  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:06.174981  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:06.175016  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:06.175095  273227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.auto-110992 san=[127.0.0.1 192.168.76.2 auto-110992 localhost minikube]
	I1026 08:32:06.518886  273227 provision.go:177] copyRemoteCerts
	I1026 08:32:06.518947  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:06.518984  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:06.537125  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:06.638429  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:06.659732  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1026 08:32:06.679331  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:32:06.697009  273227 provision.go:87] duration metric: took 542.084588ms to configureAuth
	I1026 08:32:06.697050  273227 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:06.697202  273227 config.go:182] Loaded profile config "auto-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:06.697321  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:06.716363  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:06.716652  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:06.716675  273227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:07.001340  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:32:07.001372  273227 machine.go:96] duration metric: took 1.384012506s to provisionDockerMachine
	I1026 08:32:07.001385  273227 client.go:171] duration metric: took 6.308826562s to LocalClient.Create
	I1026 08:32:07.001401  273227 start.go:167] duration metric: took 6.308918019s to libmachine.API.Create "auto-110992"
	I1026 08:32:07.001409  273227 start.go:293] postStartSetup for "auto-110992" (driver="docker")
	I1026 08:32:07.001420  273227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:32:07.001492  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:32:07.001540  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.023277  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.130947  273227 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:32:07.135548  273227 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:32:07.135579  273227 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:32:07.135591  273227 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:32:07.135643  273227 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:32:07.135737  273227 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:32:07.135853  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:32:07.144946  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:07.169512  273227 start.go:296] duration metric: took 168.087768ms for postStartSetup
	I1026 08:32:07.169910  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:07.192063  273227 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json ...
	I1026 08:32:07.192403  273227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:32:07.192461  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.213189  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.313919  273227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:32:07.319419  273227 start.go:128] duration metric: took 6.630027588s to createHost
	I1026 08:32:07.319456  273227 start.go:83] releasing machines lock for "auto-110992", held for 6.630169587s
	I1026 08:32:07.319525  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:07.340972  273227 ssh_runner.go:195] Run: cat /version.json
	I1026 08:32:07.341029  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.341033  273227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:32:07.341101  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.361920  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.362212  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.459265  273227 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:07.522135  273227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:32:07.565331  273227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:32:07.570371  273227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:32:07.570444  273227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:32:07.598494  273227 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 08:32:07.598515  273227 start.go:495] detecting cgroup driver to use...
	I1026 08:32:07.598542  273227 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:32:07.598580  273227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:32:07.617183  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:32:07.630262  273227 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:32:07.630328  273227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:32:07.647531  273227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:32:07.665358  273227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:32:07.774067  273227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:32:07.870404  273227 docker.go:234] disabling docker service ...
	I1026 08:32:07.870473  273227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:32:07.889397  273227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:32:07.902564  273227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:32:07.991696  273227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:32:08.087478  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:32:08.100344  273227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:32:08.115325  273227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:32:08.115394  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.126985  273227 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:32:08.127049  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.137795  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.148380  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.158236  273227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:32:08.168389  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.178343  273227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.194388  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.203637  273227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:32:08.211542  273227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:32:08.218786  273227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:08.299284  273227 ssh_runner.go:195] Run: sudo systemctl restart crio
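	Condensed, the CRI-O drop-in edits performed above amount to pointing the runtime at the desired pause image and cgroup manager, then restarting it so /etc/crio/crio.conf.d/02-crio.conf is re-read (paths and values taken from the log; run on the node):
	
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	  sudo systemctl daemon-reload && sudo systemctl restart crio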
	I1026 08:32:08.419486  273227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:32:08.419556  273227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:32:08.423769  273227 start.go:563] Will wait 60s for crictl version
	I1026 08:32:08.423842  273227 ssh_runner.go:195] Run: which crictl
	I1026 08:32:08.428765  273227 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:32:08.462652  273227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:32:08.462732  273227 ssh_runner.go:195] Run: crio --version
	I1026 08:32:08.500178  273227 ssh_runner.go:195] Run: crio --version
	I1026 08:32:08.533024  273227 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	
	
	==> CRI-O <==
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.350382036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.55514526Z" level=info msg="Removing container: c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2" id=8151b35e-549b-40db-a7be-092d0c672107 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.565191807Z" level=info msg="Removed container c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=8151b35e-549b-40db-a7be-092d0c672107 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.488961933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=05c15e9c-fc93-435f-b33c-fbc72f2ec74d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.490047681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0e235e0b-a87d-4b6b-a425-ab75c0ee8b77 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.49111269Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=0d78e862-c06f-4901-bc38-6f04b606081f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.491302981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.499234604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.499933325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.531583959Z" level=info msg="Created container 03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=0d78e862-c06f-4901-bc38-6f04b606081f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.532233626Z" level=info msg="Starting container: 03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8" id=6a834d32-9ed2-4d21-8552-be8a8c55cb58 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.534519839Z" level=info msg="Started container" PID=1787 containerID=03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper id=6a834d32-9ed2-4d21-8552-be8a8c55cb58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9102aabbcb9d206d508039487c5f06c4c1108a5f5a5888e06693689237c6e70
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.61076536Z" level=info msg="Removing container: aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767" id=951e37dc-b240-4575-8348-ea28cff4b1fc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.612072618Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=575db176-d352-4c18-8321-01a9a0faa64f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.613592158Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2fc64f13-16ec-453f-8e56-e1c5e242025b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.615056594Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aa7c2a3-d995-4ca9-9518-3f0e14781343 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.615350079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.6239202Z" level=info msg="Removed container aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=951e37dc-b240-4575-8348-ea28cff4b1fc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.624894524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625106855Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a0f65ecc2935c91ab63350018cebaf912a405a5d6d6d8185cd86dcbe5a3b6e0a/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625141904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a0f65ecc2935c91ab63350018cebaf912a405a5d6d6d8185cd86dcbe5a3b6e0a/merged/etc/group: no such file or directory"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625471055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.666779101Z" level=info msg="Created container cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196: kube-system/storage-provisioner/storage-provisioner" id=7aa7c2a3-d995-4ca9-9518-3f0e14781343 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.667589571Z" level=info msg="Starting container: cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196" id=94f05e62-4dc3-41d2-b725-0758de04eee6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.669936793Z" level=info msg="Started container" PID=1797 containerID=cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196 description=kube-system/storage-provisioner/storage-provisioner id=94f05e62-4dc3-41d2-b725-0758de04eee6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8495a59ae7aad42c3db55b0ab731834c75f57919c3b46365467dabbee002979
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cd0e34a988558       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         1                   d8495a59ae7aa       storage-provisioner                          kube-system
	03fbe11ac2956       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           22 seconds ago      Exited              dashboard-metrics-scraper   2                   b9102aabbcb9d       dashboard-metrics-scraper-6ffb444bf9-q6gjd   kubernetes-dashboard
	4b898bc10d22e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   07cb4dd7b9d75       kubernetes-dashboard-855c9754f9-7m27d        kubernetes-dashboard
	bbf52faf92933       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   9cbf9a8976dae       busybox                                      default
	16f5a20811e08       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     0                   d574c06d742c6       coredns-66bc5c9577-jktn8                     kube-system
	8fd71ca3934b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         0                   d8495a59ae7aa       storage-provisioner                          kube-system
	b59c79a123964       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 0                   d76800a42f360       kindnet-m4lzl                                kube-system
	9ad903d67dde6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           53 seconds ago      Running             kube-proxy                  0                   8efce21bda768       kube-proxy-5bf98                             kube-system
	b4e2a3adae3b2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   d52c81e7d1e3a       kube-controller-manager-embed-certs-752315   kube-system
	0aaa1f21f536e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   c80152f6d78a7       kube-apiserver-embed-certs-752315            kube-system
	412f2a653f74c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   757207105ac69       kube-scheduler-embed-certs-752315            kube-system
	53cccbff24b07       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   be2a67235a074       etcd-embed-certs-752315                      kube-system
	
	
	==> coredns [16f5a20811e08c9a87436b830181d07a08d7c9c19042686b547a09d115b7077e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40528 - 14662 "HINFO IN 5552716523772564236.1131787939876561468. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023549224s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
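	The i/o timeouts above mean coredns could not reach the kube-apiserver service VIP (10.96.0.1:443) while it was starting. Hypothetical first checks from a working kubectl context (not part of this test run):
	
	  kubectl get endpoints kubernetes          # the real apiserver endpoints behind the VIP
	  kubectl -n kube-system logs deploy/coredns --tail=20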
	
	
	==> describe nodes <==
	Name:               embed-certs-752315
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-752315
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-752315
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-752315
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-752315
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cae690de-b1ed-4dcd-8194-03992c24069f
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-jktn8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-752315                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         112s
	  kube-system                 kindnet-m4lzl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-752315             250m (3%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-controller-manager-embed-certs-752315    200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-5bf98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-752315             100m (1%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q6gjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7m27d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 105s               kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  112s               kubelet          Node embed-certs-752315 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    112s               kubelet          Node embed-certs-752315 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     112s               kubelet          Node embed-certs-752315 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s               node-controller  Node embed-certs-752315 event: Registered Node embed-certs-752315 in Controller
	  Normal  NodeReady                95s                kubelet          Node embed-certs-752315 status is now: NodeReady
	  Normal  Starting                 57s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)  kubelet          Node embed-certs-752315 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)  kubelet          Node embed-certs-752315 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)  kubelet          Node embed-certs-752315 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-752315 event: Registered Node embed-certs-752315 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66] <==
	{"level":"warn","ts":"2025-10-26T08:31:15.263022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.271137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.278364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.285821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.292220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.299390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.305869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.313509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.321575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.329384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.336208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.342623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.348776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.354967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.361216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.375101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.382863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.389098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.396521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.404154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.416911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.423769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.431132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.480225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54894","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:31:31.141456Z","caller":"traceutil/trace.go:172","msg":"trace[445669958] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"131.987264ms","start":"2025-10-26T08:31:31.009448Z","end":"2025-10-26T08:31:31.141435Z","steps":["trace[445669958] 'process raft request'  (duration: 65.919112ms)","trace[445669958] 'compare'  (duration: 65.947986ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:32:10 up  1:14,  0 user,  load average: 4.56, 3.50, 2.18
	Linux embed-certs-752315 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b59c79a12396440c5b834d5c3f3895abb0777e31e4f19207a302ce038fb04e94] <==
	I1026 08:31:17.019592       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:31:17.112127       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 08:31:17.112384       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:31:17.112424       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:31:17.112455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:31:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:31:17.316674       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:31:17.316756       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:31:17.316778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:31:17.316906       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:31:17.708260       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:31:17.708342       1 metrics.go:72] Registering metrics
	I1026 08:31:17.708437       1 controller.go:711] "Syncing nftables rules"
	I1026 08:31:27.317534       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:27.317584       1 main.go:301] handling current node
	I1026 08:31:37.319656       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:37.319709       1 main.go:301] handling current node
	I1026 08:31:47.317448       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:47.317485       1 main.go:301] handling current node
	I1026 08:31:57.317526       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:57.317554       1 main.go:301] handling current node
	I1026 08:32:07.316627       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:32:07.316678       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2] <==
	I1026 08:31:15.956161       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:31:15.955777       1 policy_source.go:240] refreshing policies
	I1026 08:31:15.957758       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:31:15.958265       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:31:15.956409       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 08:31:15.956382       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 08:31:15.959681       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:31:15.959762       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:31:15.959803       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:31:15.958887       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:31:15.960084       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1026 08:31:15.963391       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:31:15.968321       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:31:15.991037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:31:16.205542       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:31:16.234337       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:31:16.259523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:31:16.265857       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:31:16.272978       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:31:16.323884       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.105.177"}
	I1026 08:31:16.334499       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.247.66"}
	I1026 08:31:16.862068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:31:19.336016       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:31:19.437163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:31:19.686483       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2] <==
	I1026 08:31:19.265524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:31:19.282262       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 08:31:19.282280       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:31:19.282308       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:31:19.283306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:31:19.283346       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:31:19.283354       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:31:19.283481       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:31:19.283573       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-752315"
	I1026 08:31:19.283620       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:31:19.284020       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:31:19.285776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:31:19.285805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:31:19.285820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:31:19.285860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:31:19.285907       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 08:31:19.286947       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:31:19.288046       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:31:19.288099       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:19.303338       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:19.309712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:31:19.309731       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:31:19.309740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:31:19.311875       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:31:19.322695       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9ad903d67dde66294e4479668d0c5b6cf2ee2a72713eb621ec1ffceff453c1d3] <==
	I1026 08:31:16.901479       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:31:16.968924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:31:17.069158       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:31:17.069201       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 08:31:17.069359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:31:17.090832       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:31:17.090888       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:31:17.095776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:31:17.096063       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:31:17.096088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:17.097329       1 config.go:200] "Starting service config controller"
	I1026 08:31:17.097352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:31:17.097386       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:31:17.097392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:31:17.097410       1 config.go:309] "Starting node config controller"
	I1026 08:31:17.097419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:31:17.097426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:31:17.097427       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:31:17.097440       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:31:17.198460       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:31:17.198577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:31:17.198587       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a] <==
	I1026 08:31:15.327390       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:31:16.235503       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:31:16.235534       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:16.240177       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 08:31:16.240190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.240216       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 08:31:16.240217       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.240210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:16.240316       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:16.240751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:31:16.240788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:31:16.341025       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.341037       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 08:31:16.341173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:20 embed-certs-752315 kubelet[721]: I1026 08:31:20.025300     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gnv\" (UniqueName: \"kubernetes.io/projected/c2ba33f0-784d-4cd9-9324-324155d48377-kube-api-access-c4gnv\") pod \"kubernetes-dashboard-855c9754f9-7m27d\" (UID: \"c2ba33f0-784d-4cd9-9324-324155d48377\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m27d"
	Oct 26 08:31:23 embed-certs-752315 kubelet[721]: I1026 08:31:23.637088     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:31:25 embed-certs-752315 kubelet[721]: I1026 08:31:25.313689     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m27d" podStartSLOduration=3.398766922 podStartE2EDuration="6.313663123s" podCreationTimestamp="2025-10-26 08:31:19 +0000 UTC" firstStartedPulling="2025-10-26 08:31:20.238666613 +0000 UTC m=+6.853087427" lastFinishedPulling="2025-10-26 08:31:23.153562819 +0000 UTC m=+9.767983628" observedRunningTime="2025-10-26 08:31:23.556481279 +0000 UTC m=+10.170902094" watchObservedRunningTime="2025-10-26 08:31:25.313663123 +0000 UTC m=+11.928083937"
	Oct 26 08:31:26 embed-certs-752315 kubelet[721]: I1026 08:31:26.549381     721 scope.go:117] "RemoveContainer" containerID="c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: I1026 08:31:27.553699     721 scope.go:117] "RemoveContainer" containerID="c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: I1026 08:31:27.553836     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: E1026 08:31:27.554076     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:28 embed-certs-752315 kubelet[721]: I1026 08:31:28.558708     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:28 embed-certs-752315 kubelet[721]: E1026 08:31:28.558877     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:31 embed-certs-752315 kubelet[721]: I1026 08:31:31.966167     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:31 embed-certs-752315 kubelet[721]: E1026 08:31:31.966386     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.488435     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.609374     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.609616     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: E1026 08:31:47.609815     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.611174     721 scope.go:117] "RemoveContainer" containerID="8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357"
	Oct 26 08:31:51 embed-certs-752315 kubelet[721]: I1026 08:31:51.966836     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:31:51 embed-certs-752315 kubelet[721]: E1026 08:31:51.966984     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:32:03 embed-certs-752315 kubelet[721]: I1026 08:32:03.488725     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:32:03 embed-certs-752315 kubelet[721]: E1026 08:32:03.488959     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:32:07 embed-certs-752315 kubelet[721]: I1026 08:32:07.680126     721 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: kubelet.service: Consumed 1.734s CPU time.
	
	
	==> kubernetes-dashboard [4b898bc10d22ebec112eb26c1c60033644c1c9521519a40efded7e7d0fb11a33] <==
	2025/10/26 08:31:23 Using namespace: kubernetes-dashboard
	2025/10/26 08:31:23 Using in-cluster config to connect to apiserver
	2025/10/26 08:31:23 Using secret token for csrf signing
	2025/10/26 08:31:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:31:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:31:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:31:23 Generating JWE encryption key
	2025/10/26 08:31:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:31:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:31:23 Initializing JWE encryption key from synchronized object
	2025/10/26 08:31:23 Creating in-cluster Sidecar client
	2025/10/26 08:31:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:23 Serving insecurely on HTTP port: 9090
	2025/10/26 08:31:23 Starting overwatch
	2025/10/26 08:31:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357] <==
	I1026 08:31:16.869507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:46.873637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196] <==
	I1026 08:31:47.684311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:47.693805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:47.693909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:31:47.697189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:51.152891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:55.413876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:59.012984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:02.067414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.089619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.096047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:05.096164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:32:05.096336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95!
	I1026 08:32:05.096306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf86141f-07c1-4e09-9431-3b0349d6fa2c", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95 became leader
	W1026 08:32:05.098478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.101836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:05.196616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95!
	W1026 08:32:07.104828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:07.110198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:09.114281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:09.118389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752315 -n embed-certs-752315: exit status 2 (366.819049ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-752315 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-752315
helpers_test.go:243: (dbg) docker inspect embed-certs-752315:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	        "Created": "2025-10-26T08:30:03.656841768Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 258742,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:31:06.064033845Z",
	            "FinishedAt": "2025-10-26T08:31:05.100205627Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hostname",
	        "HostsPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/hosts",
	        "LogPath": "/var/lib/docker/containers/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215/8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215-json.log",
	        "Name": "/embed-certs-752315",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-752315:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-752315",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "8eca8953ad72ea9a9b4d4a999033961da2315c86ddf66925637b226afd778215",
	                "LowerDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/merged",
	                "UpperDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/diff",
	                "WorkDir": "/var/lib/docker/overlay2/6845dbb109d8d0c47760eee1a1982a045182bb149bbb770f01a93faa904cde6f/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-752315",
	                "Source": "/var/lib/docker/volumes/embed-certs-752315/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-752315",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-752315",
	                "name.minikube.sigs.k8s.io": "embed-certs-752315",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "220ac7a6a1664bed31842bddcd77b605efc1e7e095f15219c0e3836ad97ff4d5",
	            "SandboxKey": "/var/run/docker/netns/220ac7a6a166",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33079"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33082"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33080"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33081"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-752315": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "f6:7b:9e:9e:f7:a1",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d5aa8ca4605176daf87c9c9f24c1c35f5c6618444861770e8529506402674500",
	                    "EndpointID": "c058fcf50929a5e59f5494e75870dd9dd045cd9154e6c62f2c61f86c0a87e206",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-752315",
	                        "8eca8953ad72"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315: exit status 2 (336.285795ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-752315 logs -n 25: (1.226989828s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p no-preload-001983 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p no-preload-001983 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ addons  │ enable metrics-server -p embed-certs-752315 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │                     │
	│ stop    │ -p embed-certs-752315 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:32:00.490412  273227 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:00.490682  273227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:00.490694  273227 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:00.490699  273227 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:00.490990  273227 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:00.491492  273227 out.go:368] Setting JSON to false
	I1026 08:32:00.492613  273227 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4471,"bootTime":1761463049,"procs":337,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:00.492697  273227 start.go:141] virtualization: kvm guest
	I1026 08:32:00.494601  273227 out.go:179] * [auto-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:00.496107  273227 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:00.496095  273227 notify.go:220] Checking for updates...
	I1026 08:32:00.501725  273227 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:00.502963  273227 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:00.504471  273227 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:00.505791  273227 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:00.506891  273227 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:00.508773  273227 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.508927  273227 config.go:182] Loaded profile config "embed-certs-752315": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.509084  273227 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.509207  273227 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:00.535430  273227 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:00.535553  273227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:00.594940  273227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:00.584446159 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:00.595077  273227 docker.go:318] overlay module found
	I1026 08:32:00.597081  273227 out.go:179] * Using the docker driver based on user configuration
	I1026 08:32:00.598487  273227 start.go:305] selected driver: docker
	I1026 08:32:00.598508  273227 start.go:925] validating driver "docker" against <nil>
	I1026 08:32:00.598523  273227 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:00.599365  273227 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:00.658825  273227 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:00.648902819 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:00.658982  273227 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:32:00.659211  273227 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:00.661215  273227 out.go:179] * Using Docker driver with root privileges
	I1026 08:32:00.662512  273227 cni.go:84] Creating CNI manager for ""
	I1026 08:32:00.662576  273227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:00.662587  273227 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
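
The kindnet recommendation above follows from the driver/runtime pair: with the docker driver and a non-Docker runtime, minikube needs an explicit CNI and defaults to kindnet when none is requested. A sketch of making that implicit selection explicit on the command line (other flags from the command table above omitted):

	# Equivalent to the choice logged above; --cni=kindnet is the value
	# minikube picks on its own for docker + crio.
	minikube start -p auto-110992 --driver=docker --container-runtime=crio --cni=kindnet
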
	I1026 08:32:00.662652  273227 start.go:349] cluster config:
	{Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:00.664056  273227 out.go:179] * Starting "auto-110992" primary control-plane node in "auto-110992" cluster
	I1026 08:32:00.665333  273227 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:00.666648  273227 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:32:00.667844  273227 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:00.667887  273227 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:00.667884  273227 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:00.667896  273227 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:00.668006  273227 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:00.668020  273227 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:00.668137  273227 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json ...
	I1026 08:32:00.668160  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json: {Name:mk9a603c818bfb8aee3ce9258672b2a135ca6e1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:00.689081  273227 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:00.689100  273227 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:00.689119  273227 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:32:00.689150  273227 start.go:360] acquireMachinesLock for auto-110992: {Name:mk20dec79305eb324248958d5953c5e7e46e96f4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:00.689271  273227 start.go:364] duration metric: took 81.294µs to acquireMachinesLock for "auto-110992"
	I1026 08:32:00.689303  273227 start.go:93] Provisioning new machine with config: &{Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:00.689377  273227 start.go:125] createHost starting for "" (driver="docker")
	W1026 08:31:58.286439  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	W1026 08:32:00.786204  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	I1026 08:31:59.004334  270203 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-366970:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.790969179s)
	I1026 08:31:59.004368  270203 kic.go:203] duration metric: took 4.791173084s to extract preloaded images to volume ...
	W1026 08:31:59.004456  270203 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:31:59.004503  270203 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:31:59.004547  270203 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:31:59.063210  270203 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-366970 --name newest-cni-366970 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-366970 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-366970 --network newest-cni-366970 --ip 192.168.85.2 --volume newest-cni-366970:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
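
Every --publish in the docker run above maps a fixed container port (22, 2376, 5000, 8443, 32443) to an ephemeral host port on 127.0.0.1; the SSH port 33091 used through the rest of this log is one of them. To see the full mapping for a profile's container (a sketch, run on the host):

	# Which host ports Docker assigned to the --publish=127.0.0.1:: flags above.
	docker port newest-cni-366970
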
	I1026 08:31:59.359560  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Running}}
	I1026 08:31:59.379079  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:31:59.399125  270203 cli_runner.go:164] Run: docker exec newest-cni-366970 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:31:59.444850  270203 oci.go:144] the created container "newest-cni-366970" has a running status.
	I1026 08:31:59.444883  270203 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa...
	I1026 08:31:59.791240  270203 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:31:59.999317  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:00.019575  270203 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:32:00.019599  270203 kic_runner.go:114] Args: [docker exec --privileged newest-cni-366970 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:32:00.071768  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:00.091906  270203 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:00.092008  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.111533  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.111811  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.111830  270203 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:00.258348  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366970
	
	I1026 08:32:00.258378  270203 ubuntu.go:182] provisioning hostname "newest-cni-366970"
	I1026 08:32:00.258437  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.278883  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.279120  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.279140  270203 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-366970 && echo "newest-cni-366970" | sudo tee /etc/hostname
	I1026 08:32:00.431811  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-366970
	
	I1026 08:32:00.431923  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.451034  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.451236  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.451293  270203 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-366970' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-366970/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-366970' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:00.597463  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:32:00.597487  270203 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:00.597517  270203 ubuntu.go:190] setting up certificates
	I1026 08:32:00.597530  270203 provision.go:84] configureAuth start
	I1026 08:32:00.597596  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:00.618302  270203 provision.go:143] copyHostCerts
	I1026 08:32:00.618371  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:00.618381  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:00.618473  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:00.618615  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:00.618625  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:00.618668  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:00.618773  270203 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:00.618794  270203 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:00.618839  270203 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:00.618929  270203 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.newest-cni-366970 san=[127.0.0.1 192.168.85.2 localhost minikube newest-cni-366970]
	I1026 08:32:00.759440  270203 provision.go:177] copyRemoteCerts
	I1026 08:32:00.759513  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:00.759559  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.781259  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:00.885539  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:00.907147  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1026 08:32:00.925481  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:32:00.945519  270203 provision.go:87] duration metric: took 347.972983ms to configureAuth
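
configureAuth above generates a server certificate whose SANs cover the node's IP and hostnames, then copies the CA and server pair into /etc/docker inside the node. A quick check of what was provisioned (a sketch; key path and port 33091 are taken from this run):

	# Inspect the server cert minikube just pushed; per the san=[...] line above,
	# the SAN list should include 127.0.0.1, 192.168.85.2, localhost, minikube
	# and newest-cni-366970.
	ssh -p 33091 -i /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa \
	  docker@127.0.0.1 'sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName'
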
	I1026 08:32:00.945554  270203 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:00.945746  270203 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:00.945854  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:00.967537  270203 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:00.967732  270203 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33091 <nil> <nil>}
	I1026 08:32:00.967749  270203 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:01.251501  270203 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:32:01.251539  270203 machine.go:96] duration metric: took 1.159607829s to provisionDockerMachine
	I1026 08:32:01.251554  270203 client.go:171] duration metric: took 7.731687004s to LocalClient.Create
	I1026 08:32:01.251579  270203 start.go:167] duration metric: took 7.731761794s to libmachine.API.Create "newest-cni-366970"
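
The last provisioning step before this point wrote /etc/sysconfig/crio.minikube with --insecure-registry 10.96.0.0/12 and restarted cri-o: 10.96.0.0/12 is the cluster's ServiceCIDR, so image pulls from in-cluster registry Services work without TLS. Verifying from the host (a sketch, reusing this run's SSH port and key):

	# The file the provisioner wrote above, plus cri-o's state after the restart.
	ssh -p 33091 -i /home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa \
	  docker@127.0.0.1 'cat /etc/sysconfig/crio.minikube && systemctl is-active crio'
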
	I1026 08:32:01.251593  270203 start.go:293] postStartSetup for "newest-cni-366970" (driver="docker")
	I1026 08:32:01.251606  270203 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:32:01.251671  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:32:01.251719  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.273398  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.383963  270203 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:32:01.388177  270203 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:32:01.388213  270203 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:32:01.388225  270203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:32:01.388295  270203 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:32:01.388385  270203 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:32:01.388477  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:32:01.398827  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:01.428207  270203 start.go:296] duration metric: took 176.591146ms for postStartSetup
	I1026 08:32:01.428588  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:01.448722  270203 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/config.json ...
	I1026 08:32:01.448991  270203 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:32:01.449040  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.472013  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.573716  270203 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:32:01.578740  270203 start.go:128] duration metric: took 8.061588923s to createHost
	I1026 08:32:01.578772  270203 start.go:83] releasing machines lock for "newest-cni-366970", held for 8.061735624s
	I1026 08:32:01.578853  270203 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-366970
	I1026 08:32:01.598742  270203 ssh_runner.go:195] Run: cat /version.json
	I1026 08:32:01.598795  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.598825  270203 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:32:01.598891  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:01.621535  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.621866  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:01.778373  270203 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:01.785728  270203 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:32:01.827382  270203 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:32:01.832449  270203 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:32:01.832520  270203 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:32:01.864376  270203 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 08:32:01.864404  270203 start.go:495] detecting cgroup driver to use...
	I1026 08:32:01.864439  270203 detect.go:190] detected "systemd" cgroup driver on host os
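
The "systemd" detected above comes from the host side (the docker info dump earlier in this log reports CgroupDriver:systemd), and the cri-o edits a few lines below make the node's runtime match it. The same lookup by hand (sketch):

	# Source of the detected cgroup driver; cri-o's cgroup_manager is set to match.
	docker info --format '{{.CgroupDriver}}'    # -> systemd on this agent
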
	I1026 08:32:01.864490  270203 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:32:01.881218  270203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:32:01.894614  270203 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:32:01.894689  270203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:32:01.913482  270203 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:32:01.936797  270203 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:32:02.028291  270203 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:32:02.149718  270203 docker.go:234] disabling docker service ...
	I1026 08:32:02.149781  270203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:32:02.169961  270203 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:32:02.183155  270203 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:32:02.284044  270203 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:32:02.376715  270203 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:32:02.390520  270203 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:32:02.405215  270203 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:32:02.405307  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.417807  270203 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:32:02.417880  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.427528  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.436858  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.446656  270203 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:32:02.455123  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.464292  270203 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:02.478476  270203 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
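
Taken together, the sed edits above pin the pause image, switch cri-o to the systemd cgroup manager with conmon in the pod-scoped cgroup, and allow unprivileged binds to low ports. The resulting drop-in should read roughly as follows (a sketch assuming the stock kicbase 02-crio.conf for everything else):

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
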
	I1026 08:32:02.487632  270203 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:32:02.495385  270203 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:32:02.503233  270203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:02.587450  270203 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:32:04.932480  270203 ssh_runner.go:235] Completed: sudo systemctl restart crio: (2.34498891s)
	I1026 08:32:04.932524  270203 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:32:04.932582  270203 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:32:04.937612  270203 start.go:563] Will wait 60s for crictl version
	I1026 08:32:04.937673  270203 ssh_runner.go:195] Run: which crictl
	I1026 08:32:04.942050  270203 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:32:04.970730  270203 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
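
The wait loop above talks to cri-o over its CRI socket; the same probe can be run by hand inside the node (a sketch; crictl reads the /etc/crictl.yaml written a few steps earlier, so the endpoint flag is optional):

	# Same check the 60s wait performs, plus a look at the image store the
	# preload tarball populated.
	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version
	sudo crictl images | head
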
	I1026 08:32:04.970807  270203 ssh_runner.go:195] Run: crio --version
	I1026 08:32:05.001197  270203 ssh_runner.go:195] Run: crio --version
	I1026 08:32:05.036425  270203 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:32:05.039927  270203 cli_runner.go:164] Run: docker network inspect newest-cni-366970 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:05.059362  270203 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1026 08:32:05.063577  270203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:32:05.077595  270203 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1026 08:32:00.692208  273227 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:32:00.692484  273227 start.go:159] libmachine.API.Create for "auto-110992" (driver="docker")
	I1026 08:32:00.692547  273227 client.go:168] LocalClient.Create starting
	I1026 08:32:00.692647  273227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:32:00.692693  273227 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:00.692718  273227 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:00.692792  273227 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:32:00.692822  273227 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:00.692838  273227 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:00.693231  273227 cli_runner.go:164] Run: docker network inspect auto-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:32:00.711498  273227 cli_runner.go:211] docker network inspect auto-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:32:00.711573  273227 network_create.go:284] running [docker network inspect auto-110992] to gather additional debugging logs...
	I1026 08:32:00.711593  273227 cli_runner.go:164] Run: docker network inspect auto-110992
	W1026 08:32:00.728518  273227 cli_runner.go:211] docker network inspect auto-110992 returned with exit code 1
	I1026 08:32:00.728558  273227 network_create.go:287] error running [docker network inspect auto-110992]: docker network inspect auto-110992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-110992 not found
	I1026 08:32:00.728575  273227 network_create.go:289] output of [docker network inspect auto-110992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-110992 not found
	
	** /stderr **
	I1026 08:32:00.728665  273227 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:00.746727  273227 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:32:00.747494  273227 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:32:00.748216  273227 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:32:00.749006  273227 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ee58c0}
	I1026 08:32:00.749028  273227 network_create.go:124] attempt to create docker network auto-110992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 08:32:00.749074  273227 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-110992 auto-110992
	I1026 08:32:00.814237  273227 network_create.go:108] docker network auto-110992 192.168.76.0/24 created
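
Subnet selection above walks the 192.168.x.0/24 candidates in order and skips any already attached to a Docker bridge (49, 58 and 67 are held by the other profiles still running), settling on 192.168.76.0/24. The equivalent manual survey (sketch):

	# Which subnets Docker has already claimed, as network.go checks above.
	docker network ls -q | xargs docker network inspect \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'
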
	I1026 08:32:00.814294  273227 kic.go:121] calculated static IP "192.168.76.2" for the "auto-110992" container
	I1026 08:32:00.814358  273227 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:32:00.833280  273227 cli_runner.go:164] Run: docker volume create auto-110992 --label name.minikube.sigs.k8s.io=auto-110992 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:32:00.852227  273227 oci.go:103] Successfully created a docker volume auto-110992
	I1026 08:32:00.852342  273227 cli_runner.go:164] Run: docker run --rm --name auto-110992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-110992 --entrypoint /usr/bin/test -v auto-110992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:32:01.269602  273227 oci.go:107] Successfully prepared a docker volume auto-110992
	I1026 08:32:01.269658  273227 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:01.269685  273227 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:32:01.269747  273227 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:32:04.845492  273227 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (3.575692475s)
	I1026 08:32:04.845540  273227 kic.go:203] duration metric: took 3.575850558s to extract preloaded images to volume ...
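
The preload tarball extracted above is how a fresh node gets its images without pulling: it restores cri-o's image store directly under /var, which is why it is unpacked into the profile's volume before the container starts. Peeking at its contents without extracting (a sketch, path from this run):

	# Listing is enough to see the image-store layout the tarball restores.
	lz4 -dc /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 \
	  | tar -t | head
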
	W1026 08:32:04.845629  273227 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:32:04.845658  273227 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:32:04.845694  273227 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:32:04.907470  273227 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-110992 --name auto-110992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-110992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-110992 --network auto-110992 --ip 192.168.76.2 --volume auto-110992:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:32:05.202203  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Running}}
	I1026 08:32:05.221535  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.241808  273227 cli_runner.go:164] Run: docker exec auto-110992 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:32:05.290823  273227 oci.go:144] the created container "auto-110992" has a running status.
	I1026 08:32:05.290863  273227 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa...
	I1026 08:32:05.477072  273227 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	W1026 08:32:03.286431  264509 node_ready.go:57] node "default-k8s-diff-port-866212" has "Ready":"False" status (will retry)
	I1026 08:32:05.786495  264509 node_ready.go:49] node "default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:05.786522  264509 node_ready.go:38] duration metric: took 11.503921605s for node "default-k8s-diff-port-866212" to be "Ready" ...
	I1026 08:32:05.786536  264509 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:05.786590  264509 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:05.801570  264509 api_server.go:72] duration metric: took 11.881600858s to wait for apiserver process to appear ...
	I1026 08:32:05.801598  264509 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:05.801620  264509 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8444/healthz ...
	I1026 08:32:05.806467  264509 api_server.go:279] https://192.168.94.2:8444/healthz returned 200:
	ok
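
The healthz probe above is plain HTTPS against the apiserver on the container's IP; 8444 is this profile's non-default apiserver port (hence "diff-port"). Reproducing it from the host (a sketch; -k because the cert chains to the profile's own CA):

	# Same endpoint api_server.go polls above; expect the literal body "ok".
	curl -sk https://192.168.94.2:8444/healthz
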
	I1026 08:32:05.807576  264509 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:05.807605  264509 api_server.go:131] duration metric: took 5.998272ms to wait for apiserver health ...
	I1026 08:32:05.807616  264509 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:05.811419  264509 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:05.811455  264509 system_pods.go:61] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:05.811464  264509 system_pods.go:61] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:05.811470  264509 system_pods.go:61] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:05.811474  264509 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:05.811478  264509 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:05.811481  264509 system_pods.go:61] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:05.811485  264509 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:05.811490  264509 system_pods.go:61] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:05.811496  264509 system_pods.go:74] duration metric: took 3.874056ms to wait for pod list to return data ...
	I1026 08:32:05.811506  264509 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:05.813910  264509 default_sa.go:45] found service account: "default"
	I1026 08:32:05.813930  264509 default_sa.go:55] duration metric: took 2.417606ms for default service account to be created ...
	I1026 08:32:05.813939  264509 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:32:05.819794  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:05.819834  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:05.819842  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:05.819852  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:05.819859  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:05.819864  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:05.819880  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:05.819887  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:05.819899  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:05.819930  264509 retry.go:31] will retry after 277.373967ms: missing components: kube-dns
	I1026 08:32:06.100912  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.100951  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:06.100959  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.100967  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.100972  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.100977  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.100983  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.100988  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.100995  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:06.101017  264509 retry.go:31] will retry after 261.67719ms: missing components: kube-dns
	I1026 08:32:06.367271  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.367301  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:06.367307  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.367312  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.367322  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.367328  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.367335  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.367340  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.367348  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:06.367370  264509 retry.go:31] will retry after 377.967656ms: missing components: kube-dns
	I1026 08:32:06.749498  264509 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:06.749538  264509 system_pods.go:89] "coredns-66bc5c9577-h4dk5" [18fbe340-fefc-49cc-9816-4af780af38c5] Running
	I1026 08:32:06.749551  264509 system_pods.go:89] "etcd-default-k8s-diff-port-866212" [8c44096f-2caa-4b06-8008-833c59cb7f25] Running
	I1026 08:32:06.749559  264509 system_pods.go:89] "kindnet-vr7fg" [c665249b-007a-4348-8905-c4ba71426d5c] Running
	I1026 08:32:06.749565  264509 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-866212" [83aa3d99-80e0-4549-a9d5-d5f4b309a928] Running
	I1026 08:32:06.749571  264509 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-866212" [400815d7-5817-430e-b1f6-0b0b34f79556] Running
	I1026 08:32:06.749575  264509 system_pods.go:89] "kube-proxy-m4gfc" [029bb2f9-cc20-4deb-8eca-da1405fd2c84] Running
	I1026 08:32:06.749580  264509 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-866212" [62690e71-a96d-4050-a717-f4ebdd785342] Running
	I1026 08:32:06.749584  264509 system_pods.go:89] "storage-provisioner" [a87f2f9f-e47d-4081-b53e-0b0017e791ae] Running
	I1026 08:32:06.749595  264509 system_pods.go:126] duration metric: took 935.648535ms to wait for k8s-apps to be running ...
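
The three retries above were all waiting for coredns (and storage-provisioner) to leave Pending; kube-dns is the missing component named each time. The same gate expressed with kubectl (a sketch; the context name matches the profile name):

	kubectl --context default-k8s-diff-port-866212 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s
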
	I1026 08:32:06.749604  264509 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:32:06.749655  264509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:06.766064  264509 system_svc.go:56] duration metric: took 16.4492ms WaitForService to wait for kubelet
	I1026 08:32:06.766107  264509 kubeadm.go:586] duration metric: took 12.846142972s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:06.766130  264509 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:06.769049  264509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:06.769077  264509 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:06.769093  264509 node_conditions.go:105] duration metric: took 2.957803ms to run NodePressure ...
	I1026 08:32:06.769108  264509 start.go:241] waiting for startup goroutines ...
	I1026 08:32:06.769119  264509 start.go:246] waiting for cluster config update ...
	I1026 08:32:06.769133  264509 start.go:255] writing updated cluster config ...
	I1026 08:32:06.769438  264509 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:06.773831  264509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:06.778652  264509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.783420  264509 pod_ready.go:94] pod "coredns-66bc5c9577-h4dk5" is "Ready"
	I1026 08:32:06.783443  264509 pod_ready.go:86] duration metric: took 4.769673ms for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.785740  264509 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.790417  264509 pod_ready.go:94] pod "etcd-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:06.790439  264509 pod_ready.go:86] duration metric: took 4.67904ms for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.792609  264509 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.796907  264509 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:06.796929  264509 pod_ready.go:86] duration metric: took 4.29219ms for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:06.798851  264509 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.179385  264509 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:07.179417  264509 pod_ready.go:86] duration metric: took 380.546105ms for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.379177  264509 pod_ready.go:83] waiting for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:07.778473  264509 pod_ready.go:94] pod "kube-proxy-m4gfc" is "Ready"
	I1026 08:32:07.778499  264509 pod_ready.go:86] duration metric: took 399.292251ms for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
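	The pod_ready.go lines above poll each kube-system pod until its PodReady condition reports True, with the per-pod waits capped by the 4m0s budget noted earlier. A minimal sketch of that polling pattern using client-go (an illustrative helper, not minikube's actual implementation):

	// waitPodReady polls until the named pod reports PodReady=True,
	// mirroring the retry loop visible in the log above. Illustrative only.
	package podwait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
		return wait.PollUntilContextTimeout(ctx, 400*time.Millisecond, 4*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}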
	I1026 08:32:05.078989  270203 kubeadm.go:883] updating cluster {Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:32:05.079154  270203 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:05.079234  270203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:05.119971  270203 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:05.119994  270203 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:32:05.120041  270203 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:05.149466  270203 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:05.149489  270203 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:32:05.149499  270203 kubeadm.go:934] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1026 08:32:05.149590  270203 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-366970 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:32:05.149671  270203 ssh_runner.go:195] Run: crio config
	I1026 08:32:05.200918  270203 cni.go:84] Creating CNI manager for ""
	I1026 08:32:05.200941  270203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:05.200957  270203 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1026 08:32:05.200979  270203 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-366970 NodeName:newest-cni-366970 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:32:05.201132  270203 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-366970"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:32:05.201193  270203 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:32:05.210646  270203 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:32:05.210714  270203 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:32:05.219556  270203 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1026 08:32:05.234046  270203 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:32:05.251904  270203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
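	The kubeadm.go:196 dump above is the fully rendered multi-document config (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that this scp step writes to /var/tmp/minikube/kubeadm.yaml.new. minikube builds such documents from Go templates; a heavily reduced sketch of that approach (the struct and field names here are hypothetical, not minikube's actual types):

	// Reduced sketch: template one of the kubeadm config documents.
	// The real templates carry far more fields than shown here.
	package kubeadmcfg

	import (
		"bytes"
		"text/template"
	)

	var initCfg = template.Must(template.New("init").Parse(`apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.APIServerPort}}
	nodeRegistration:
	  criSocket: unix://{{.CRISocket}}
	  name: "{{.NodeName}}"
	`))

	type Params struct {
		AdvertiseAddress string
		APIServerPort    int
		CRISocket        string
		NodeName         string
	}

	func Render(p Params) ([]byte, error) {
		var buf bytes.Buffer
		if err := initCfg.Execute(&buf, p); err != nil {
			return nil, err
		}
		return buf.Bytes(), nil
	}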
	I1026 08:32:05.267439  270203 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:32:05.271514  270203 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
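	The bash one-liner above pins control-plane.minikube.internal idempotently: filter out any stale entry, append the fresh mapping, then cp the result back over /etc/hosts. It uses cp rather than mv because /etc/hosts is a bind mount inside the container and cannot be replaced by rename; writing in place preserves the mount. The same update expressed in Go, purely as an illustration:

	// Sketch: idempotently pin a host entry, like the grep -v / echo / cp
	// one-liner above. os.WriteFile truncates and rewrites in place,
	// preserving the bind-mounted file's inode.
	package hostspin

	import (
		"os"
		"strings"
	)

	func pinHost(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
		kept := lines[:0]
		for _, line := range lines {
			if !strings.HasSuffix(line, "\t"+host) { // drop any stale entry
				kept = append(kept, line)
			}
		}
		kept = append(kept, ip+"\t"+host) // append the fresh mapping
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}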
	I1026 08:32:05.281588  270203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:05.377197  270203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:05.409038  270203 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970 for IP: 192.168.85.2
	I1026 08:32:05.409061  270203 certs.go:195] generating shared ca certs ...
	I1026 08:32:05.409080  270203 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.409234  270203 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:32:05.409303  270203 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:32:05.409315  270203 certs.go:257] generating profile certs ...
	I1026 08:32:05.409377  270203 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key
	I1026 08:32:05.409396  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt with IP's: []
	I1026 08:32:05.737016  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt ...
	I1026 08:32:05.737042  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.crt: {Name:mked65a6c31d8090d1294b99baec89ed05a55f1a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.737217  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key ...
	I1026 08:32:05.737233  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/client.key: {Name:mk1775f3987869ce392487a7e3e3ef4d1bec339a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.737380  270203 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237
	I1026 08:32:05.737421  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1026 08:32:05.968221  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 ...
	I1026 08:32:05.968265  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237: {Name:mk44b72c6f2d5ba13c5724b22d45f17fdad9f076 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.968415  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237 ...
	I1026 08:32:05.968429  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237: {Name:mk7ada4f016025b914a1cd6eaf66cd7314d045a1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:05.968534  270203 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt.8b551237 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt
	I1026 08:32:05.968612  270203 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key.8b551237 -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key
	I1026 08:32:05.968671  270203 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key
	I1026 08:32:05.968687  270203 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt with IP's: []
	I1026 08:32:06.482626  270203 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt ...
	I1026 08:32:06.482657  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt: {Name:mk302511fc3bdeeb11e202f01e4b462222003b7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:06.482827  270203 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key ...
	I1026 08:32:06.482840  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key: {Name:mka6ec3ba5bd6fb50b5978e23b03914eaace95f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
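	The crypto.go:68/156/164 steps above generate and write the profile certificates signed by the shared minikube CA; note the apiserver cert's SAN set [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2], covering the in-cluster service IP, loopback, and the node IP. A compact sketch of issuing a CA-signed certificate with IP SANs via Go's crypto/x509 (simplified: fresh key each call, no file locking or PEM encoding):

	// Sketch: issue a CA-signed certificate with IP SANs, roughly the
	// operation the crypto.go lines above perform. Illustrative only.
	package certs

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func issueCert(ca *x509.Certificate, caKey *rsa.PrivateKey, cn string, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: cn},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		return der, key, err
	}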
	I1026 08:32:06.483033  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:32:06.483075  270203 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:32:06.483082  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:32:06.483103  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:32:06.483127  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:32:06.483155  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:32:06.483194  270203 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:06.483778  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:32:06.502984  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:32:06.520891  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:32:06.540026  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:32:06.558469  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1026 08:32:06.576775  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:32:06.594636  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:32:06.613439  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/newest-cni-366970/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1026 08:32:06.631635  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:32:06.651886  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:32:06.670940  270203 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:32:06.688894  270203 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:32:06.701747  270203 ssh_runner.go:195] Run: openssl version
	I1026 08:32:06.709042  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:32:06.719492  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.724410  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.724478  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:32:06.773556  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:32:06.784694  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:32:06.795414  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.800089  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.800169  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:06.838113  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:32:06.848434  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:32:06.858026  270203 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.861970  270203 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.862031  270203 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:32:06.904996  270203 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
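	The repeated openssl x509 -hash -noout runs compute the OpenSSL subject hash (e.g. b5213941 for minikubeCA.pem), and the test -L || ln -fs commands publish each certificate under /etc/ssl/certs/<hash>.0, the name OpenSSL's hashed-directory lookup expects. The same two steps from Go, shelling out to openssl just as the log does (an illustration, not minikube's code):

	// Sketch: compute a cert's OpenSSL subject hash and create the
	// /etc/ssl/certs/<hash>.0 symlink the lookup machinery expects.
	package certlink

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs: replace any existing link
		return os.Symlink(certPath, link)
	}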
	I1026 08:32:06.916483  270203 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:32:06.921168  270203 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:32:06.921236  270203 kubeadm.go:400] StartCluster: {Name:newest-cni-366970 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-366970 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:06.921336  270203 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:32:06.921405  270203 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:32:06.962682  270203 cri.go:89] found id: ""
	I1026 08:32:06.962750  270203 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:32:06.972025  270203 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:32:06.983189  270203 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:32:06.983243  270203 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:32:06.993821  270203 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:32:06.993845  270203 kubeadm.go:157] found existing configuration files:
	
	I1026 08:32:06.993887  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:32:07.003633  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:32:07.003690  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:32:07.012590  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:32:07.021875  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:32:07.021926  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:32:07.031595  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:32:07.040147  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:32:07.040203  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:32:07.048317  270203 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:32:07.055822  270203 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:32:07.055874  270203 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:32:07.064117  270203 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:32:07.129372  270203 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:32:07.205489  270203 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:32:07.979366  264509 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:08.378933  264509 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-866212" is "Ready"
	I1026 08:32:08.378957  264509 pod_ready.go:86] duration metric: took 399.563567ms for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:08.378969  264509 pod_ready.go:40] duration metric: took 1.605105815s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:08.429122  264509 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:08.430968  264509 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-866212" cluster and "default" namespace by default
	I1026 08:32:05.506495  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.540710  273227 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:32:05.540733  273227 kic_runner.go:114] Args: [docker exec --privileged auto-110992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:32:05.592748  273227 cli_runner.go:164] Run: docker container inspect auto-110992 --format={{.State.Status}}
	I1026 08:32:05.617329  273227 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:05.617435  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.640933  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.641316  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.641341  273227 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:05.792357  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-110992
	
	I1026 08:32:05.792387  273227 ubuntu.go:182] provisioning hostname "auto-110992"
	I1026 08:32:05.792447  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.815405  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.815691  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.815713  273227 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-110992 && echo "auto-110992" | sudo tee /etc/hostname
	I1026 08:32:05.974533  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-110992
	
	I1026 08:32:05.974609  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:05.996186  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:05.996486  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:05.996520  273227 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-110992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-110992/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-110992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:06.154830  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
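	"Using SSH client type: native" means libmachine drives these provisioning commands with Go's in-process SSH client over the container's published port (127.0.0.1:33096 in this run) rather than spawning an ssh binary; the struct dumps on the surrounding lines are that client's configuration. A stripped-down equivalent of the hostname probe using golang.org/x/crypto/ssh (illustrative, not minikube's code):

	// Sketch: dial the mapped docker port and run `hostname`, as
	// libmachine's native SSH client does in the log above.
	package sshprobe

	import (
		"os"

		"golang.org/x/crypto/ssh"
	)

	func runHostname(addr, keyPath string) (string, error) {
		keyBytes, err := os.ReadFile(keyPath)
		if err != nil {
			return "", err
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			return "", err
		}
		client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM only
		})
		if err != nil {
			return "", err
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			return "", err
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		return string(out), err
	}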
	I1026 08:32:06.154865  273227 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:06.154896  273227 ubuntu.go:190] setting up certificates
	I1026 08:32:06.154907  273227 provision.go:84] configureAuth start
	I1026 08:32:06.154962  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:06.174548  273227 provision.go:143] copyHostCerts
	I1026 08:32:06.174604  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:06.174614  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:06.174729  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:06.174837  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:06.174851  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:06.174893  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:06.174970  273227 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:06.174981  273227 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:06.175016  273227 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:06.175095  273227 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.auto-110992 san=[127.0.0.1 192.168.76.2 auto-110992 localhost minikube]
	I1026 08:32:06.518886  273227 provision.go:177] copyRemoteCerts
	I1026 08:32:06.518947  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:06.518984  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:06.537125  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:06.638429  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:06.659732  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1026 08:32:06.679331  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:32:06.697009  273227 provision.go:87] duration metric: took 542.084588ms to configureAuth
	I1026 08:32:06.697050  273227 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:06.697202  273227 config.go:182] Loaded profile config "auto-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:06.697321  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:06.716363  273227 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:06.716652  273227 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33096 <nil> <nil>}
	I1026 08:32:06.716675  273227 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:07.001340  273227 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:32:07.001372  273227 machine.go:96] duration metric: took 1.384012506s to provisionDockerMachine
	I1026 08:32:07.001385  273227 client.go:171] duration metric: took 6.308826562s to LocalClient.Create
	I1026 08:32:07.001401  273227 start.go:167] duration metric: took 6.308918019s to libmachine.API.Create "auto-110992"
	I1026 08:32:07.001409  273227 start.go:293] postStartSetup for "auto-110992" (driver="docker")
	I1026 08:32:07.001420  273227 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:32:07.001492  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:32:07.001540  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.023277  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.130947  273227 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:32:07.135548  273227 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:32:07.135579  273227 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:32:07.135591  273227 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:32:07.135643  273227 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:32:07.135737  273227 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:32:07.135853  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:32:07.144946  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:07.169512  273227 start.go:296] duration metric: took 168.087768ms for postStartSetup
	I1026 08:32:07.169910  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:07.192063  273227 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/config.json ...
	I1026 08:32:07.192403  273227 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:32:07.192461  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.213189  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.313919  273227 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:32:07.319419  273227 start.go:128] duration metric: took 6.630027588s to createHost
	I1026 08:32:07.319456  273227 start.go:83] releasing machines lock for "auto-110992", held for 6.630169587s
	I1026 08:32:07.319525  273227 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-110992
	I1026 08:32:07.340972  273227 ssh_runner.go:195] Run: cat /version.json
	I1026 08:32:07.341029  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.341033  273227 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:32:07.341101  273227 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-110992
	I1026 08:32:07.361920  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.362212  273227 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33096 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/auto-110992/id_rsa Username:docker}
	I1026 08:32:07.459265  273227 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:07.522135  273227 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:32:07.565331  273227 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:32:07.570371  273227 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:32:07.570444  273227 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:32:07.598494  273227 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
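	Because kindnet will be installed as the CNI, the find/mv one-liner above renames any competing bridge or podman configs in /etc/cni/net.d with a .mk_disabled suffix so CRI-O cannot pick them up. The same sweep in plain Go, for illustration:

	// Sketch: rename bridge/podman CNI configs out of the way, mirroring
	// the `find ... -exec mv` one-liner in the log above.
	package cnidisable

	import (
		"os"
		"path/filepath"
		"strings"
	)

	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}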
	I1026 08:32:07.598515  273227 start.go:495] detecting cgroup driver to use...
	I1026 08:32:07.598542  273227 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:32:07.598580  273227 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:32:07.617183  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:32:07.630262  273227 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:32:07.630328  273227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:32:07.647531  273227 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:32:07.665358  273227 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:32:07.774067  273227 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:32:07.870404  273227 docker.go:234] disabling docker service ...
	I1026 08:32:07.870473  273227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:32:07.889397  273227 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:32:07.902564  273227 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:32:07.991696  273227 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:32:08.087478  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:32:08.100344  273227 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:32:08.115325  273227 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:32:08.115394  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.126985  273227 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:32:08.127049  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.137795  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.148380  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.158236  273227 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:32:08.168389  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.178343  273227 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.194388  273227 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:08.203637  273227 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:32:08.211542  273227 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:32:08.218786  273227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:08.299284  273227 ssh_runner.go:195] Run: sudo systemctl restart crio
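	The crio.go:59/70 steps patch /etc/crio/crio.conf.d/02-crio.conf in place with sed substitutions (pause image, systemd cgroup manager, conmon_cgroup, the unprivileged-port sysctl) before this daemon-reload and crio restart make them take effect. The first of those substitutions rendered in Go, as a sketch of the pattern:

	// Sketch: the Go equivalent of
	//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
	// applied to /etc/crio/crio.conf.d/02-crio.conf.
	package criocfg

	import (
		"os"
		"regexp"
	)

	var pauseImageRe = regexp.MustCompile(`(?m)^.*pause_image = .*$`)

	func setPauseImage(path, image string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		patched := pauseImageRe.ReplaceAll(data, []byte(`pause_image = "`+image+`"`))
		return os.WriteFile(path, patched, 0o644)
	}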
	I1026 08:32:08.419486  273227 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:32:08.419556  273227 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:32:08.423769  273227 start.go:563] Will wait 60s for crictl version
	I1026 08:32:08.423842  273227 ssh_runner.go:195] Run: which crictl
	I1026 08:32:08.428765  273227 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:32:08.462652  273227 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:32:08.462732  273227 ssh_runner.go:195] Run: crio --version
	I1026 08:32:08.500178  273227 ssh_runner.go:195] Run: crio --version
	I1026 08:32:08.533024  273227 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:32:08.534269  273227 cli_runner.go:164] Run: docker network inspect auto-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:08.553429  273227 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 08:32:08.557645  273227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:32:08.568980  273227 kubeadm.go:883] updating cluster {Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:32:08.569118  273227 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:08.569177  273227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:08.605862  273227 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:08.605889  273227 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:32:08.605946  273227 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:08.640485  273227 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:08.640507  273227 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:32:08.640514  273227 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 08:32:08.640591  273227 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-110992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1026 08:32:08.640650  273227 ssh_runner.go:195] Run: crio config
	I1026 08:32:08.690070  273227 cni.go:84] Creating CNI manager for ""
	I1026 08:32:08.690096  273227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:08.690121  273227 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:32:08.690154  273227 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-110992 NodeName:auto-110992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:32:08.690345  273227 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-110992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:32:08.690413  273227 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:32:08.699351  273227 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:32:08.699415  273227 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:32:08.707413  273227 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1026 08:32:08.721041  273227 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:32:08.736982  273227 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1026 08:32:08.750417  273227 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:32:08.754535  273227 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
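
	[editor's note] The bash one-liner above is minikube's idempotent hosts-file update: strip any stale control-plane.minikube.internal mapping, append the current one, and copy the result back over /etc/hosts. A rough Go equivalent of the same rewrite (a sketch, assuming it runs with privilege to write /etc/hosts):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const host = "control-plane.minikube.internal"
		const entry = "192.168.76.2\t" + host

		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		// Mirror `grep -v $'\tcontrol-plane.minikube.internal$'`:
		// keep every line except an existing mapping for the control-plane name.
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, entry) // then append the fresh mapping
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
	}
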
	I1026 08:32:08.765088  273227 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:08.866927  273227 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:08.897002  273227 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992 for IP: 192.168.76.2
	I1026 08:32:08.897027  273227 certs.go:195] generating shared ca certs ...
	I1026 08:32:08.897045  273227 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:08.897194  273227 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:32:08.897293  273227 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:32:08.897310  273227 certs.go:257] generating profile certs ...
	I1026 08:32:08.897390  273227 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.key
	I1026 08:32:08.897413  273227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.crt with IP's: []
	I1026 08:32:09.353466  273227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.crt ...
	I1026 08:32:09.353501  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.crt: {Name:mk34759ac344f6ad88898917f21e89214a49d6a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:09.353714  273227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.key ...
	I1026 08:32:09.353734  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/client.key: {Name:mkedea21bb2c31b2ab0f1ace33428236d8832a7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:09.353863  273227 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key.f62ba51a
	I1026 08:32:09.353885  273227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt.f62ba51a with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 08:32:09.721870  273227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt.f62ba51a ...
	I1026 08:32:09.721897  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt.f62ba51a: {Name:mk016eaff0a9684a7d35e8a8c8dd12be8f5f7a6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:09.722104  273227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key.f62ba51a ...
	I1026 08:32:09.722125  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key.f62ba51a: {Name:mka32ec30aabd9a8d12da518cee660fb285099ad Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:09.722240  273227 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt.f62ba51a -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt
	I1026 08:32:09.722361  273227 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key.f62ba51a -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key
	I1026 08:32:09.722446  273227 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.key
	I1026 08:32:09.722463  273227 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.crt with IP's: []
	I1026 08:32:10.281675  273227 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.crt ...
	I1026 08:32:10.281704  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.crt: {Name:mk758d07a6190beecc2d05e119bb90eba9a18c9e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:10.281890  273227 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.key ...
	I1026 08:32:10.281905  273227 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.key: {Name:mk84847ec5f0990a8056ba83fbf8ee8c39106a42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
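
	[editor's note] Each profile cert generated above follows the same recipe: a fresh key pair, a template carrying the right SANs, signed by the shared minikube CA. A compressed crypto/x509 sketch of that flow, using the apiserver cert's IP SAN list from the log ([10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]); error handling is elided and the PKCS#1 CA-key format is an assumption, so treat this as illustrative rather than minikube's actual crypto.go:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Load the shared CA pair (error handling elided for brevity).
		caPEM, _ := os.ReadFile("ca.crt")
		caKeyPEM, _ := os.ReadFile("ca.key")
		caBlock, _ := pem.Decode(caPEM)
		caCert, _ := x509.ParseCertificate(caBlock.Bytes)
		keyBlock, _ := pem.Decode(caKeyPEM)
		caKey, _ := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumption: RSA PKCS#1 key

		// Fresh key pair for the new cert.
		key, _ := rsa.GenerateKey(rand.Reader, 2048)

		// IP SANs match the log: service VIP, loopback, and the node IP.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, _ := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
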
	I1026 08:32:10.282114  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:32:10.282159  273227 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:32:10.282172  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:32:10.282200  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:32:10.282234  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:32:10.282279  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:32:10.282330  273227 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:10.283155  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:32:10.306448  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:32:10.328855  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:32:10.349273  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:32:10.369664  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1026 08:32:10.393412  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:32:10.413567  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:32:10.433445  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/auto-110992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:32:10.454738  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:32:10.475993  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:32:10.493807  273227 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:32:10.512660  273227 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:32:10.527887  273227 ssh_runner.go:195] Run: openssl version
	I1026 08:32:10.534129  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:32:10.543918  273227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:10.548878  273227 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:10.548941  273227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:10.592657  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:32:10.604006  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:32:10.615194  273227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:32:10.619624  273227 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:32:10.619674  273227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:32:10.668890  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:32:10.678499  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:32:10.693763  273227 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:32:10.698330  273227 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:32:10.698408  273227 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:32:10.748569  273227 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
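
	[editor's note] The `openssl x509 -hash -noout -in <cert>` calls above compute the subject hash that OpenSSL uses for hashed-directory lookups, which is why each CA lands in /etc/ssl/certs as a <hash>.0 symlink (b5213941.0, 51391683.0, 3ec20f2e.0). A Go sketch of the same dance, shelling out to the identical openssl invocation seen in the log:

	package main

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem"

		// Same invocation as the log: openssl x509 -hash -noout -in <cert>.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

		// OpenSSL probes <hash>.0, <hash>.1, ... in the certs directory.
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // mirror the force flag in `ln -fs`
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			panic(err)
		}
	}
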
	I1026 08:32:10.759767  273227 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:32:10.764155  273227 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:32:10.764219  273227 kubeadm.go:400] StartCluster: {Name:auto-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:10.764342  273227 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:32:10.764396  273227 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:32:10.795330  273227 cri.go:89] found id: ""
	I1026 08:32:10.795400  273227 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:32:10.804120  273227 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:32:10.812792  273227 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:32:10.812852  273227 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:32:10.820821  273227 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:32:10.820842  273227 kubeadm.go:157] found existing configuration files:
	
	I1026 08:32:10.820882  273227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:32:10.828807  273227 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:32:10.828866  273227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:32:10.836583  273227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:32:10.844310  273227 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:32:10.844377  273227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:32:10.851471  273227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:32:10.858922  273227 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:32:10.858971  273227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:32:10.865943  273227 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:32:10.873433  273227 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:32:10.873480  273227 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
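
	[editor's note] The four grep-then-rm exchanges above are one pattern applied to each kubeconfig: keep the file only if it already points at https://control-plane.minikube.internal:8443, otherwise delete it so the `kubeadm init` that follows regenerates it. Condensed into a Go sketch (same endpoint and file list as the log):

	package main

	import (
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8443"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			// Missing file or wrong endpoint: remove it so kubeadm
			// writes a fresh config against the expected address.
			if err != nil || !strings.Contains(string(data), endpoint) {
				_ = os.Remove(f)
			}
		}
	}
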
	I1026 08:32:10.881028  273227 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:32:10.925542  273227 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:32:10.925615  273227 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:32:10.951431  273227 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:32:10.951514  273227 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:32:10.951566  273227 kubeadm.go:318] OS: Linux
	I1026 08:32:10.951625  273227 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:32:10.951692  273227 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:32:10.951759  273227 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:32:10.951820  273227 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:32:10.951878  273227 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:32:10.951942  273227 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:32:10.951998  273227 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:32:10.952053  273227 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:32:11.022215  273227 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:32:11.022390  273227 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:32:11.022546  273227 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:32:11.031654  273227 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.350382036Z" level=info msg="Updated default CNI network name to kindnet"
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.55514526Z" level=info msg="Removing container: c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2" id=8151b35e-549b-40db-a7be-092d0c672107 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:27 embed-certs-752315 crio[564]: time="2025-10-26T08:31:27.565191807Z" level=info msg="Removed container c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=8151b35e-549b-40db-a7be-092d0c672107 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.488961933Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=05c15e9c-fc93-435f-b33c-fbc72f2ec74d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.490047681Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=0e235e0b-a87d-4b6b-a425-ab75c0ee8b77 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.49111269Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=0d78e862-c06f-4901-bc38-6f04b606081f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.491302981Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.499234604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.499933325Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.531583959Z" level=info msg="Created container 03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=0d78e862-c06f-4901-bc38-6f04b606081f name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.532233626Z" level=info msg="Starting container: 03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8" id=6a834d32-9ed2-4d21-8552-be8a8c55cb58 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.534519839Z" level=info msg="Started container" PID=1787 containerID=03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper id=6a834d32-9ed2-4d21-8552-be8a8c55cb58 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b9102aabbcb9d206d508039487c5f06c4c1108a5f5a5888e06693689237c6e70
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.61076536Z" level=info msg="Removing container: aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767" id=951e37dc-b240-4575-8348-ea28cff4b1fc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.612072618Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=575db176-d352-4c18-8321-01a9a0faa64f name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.613592158Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2fc64f13-16ec-453f-8e56-e1c5e242025b name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.615056594Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=7aa7c2a3-d995-4ca9-9518-3f0e14781343 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.615350079Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.6239202Z" level=info msg="Removed container aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd/dashboard-metrics-scraper" id=951e37dc-b240-4575-8348-ea28cff4b1fc name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.624894524Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625106855Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/a0f65ecc2935c91ab63350018cebaf912a405a5d6d6d8185cd86dcbe5a3b6e0a/merged/etc/passwd: no such file or directory"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625141904Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/a0f65ecc2935c91ab63350018cebaf912a405a5d6d6d8185cd86dcbe5a3b6e0a/merged/etc/group: no such file or directory"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.625471055Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.666779101Z" level=info msg="Created container cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196: kube-system/storage-provisioner/storage-provisioner" id=7aa7c2a3-d995-4ca9-9518-3f0e14781343 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.667589571Z" level=info msg="Starting container: cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196" id=94f05e62-4dc3-41d2-b725-0758de04eee6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:31:47 embed-certs-752315 crio[564]: time="2025-10-26T08:31:47.669936793Z" level=info msg="Started container" PID=1797 containerID=cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196 description=kube-system/storage-provisioner/storage-provisioner id=94f05e62-4dc3-41d2-b725-0758de04eee6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d8495a59ae7aad42c3db55b0ab731834c75f57919c3b46365467dabbee002979
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	cd0e34a988558       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   d8495a59ae7aa       storage-provisioner                          kube-system
	03fbe11ac2956       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago      Exited              dashboard-metrics-scraper   2                   b9102aabbcb9d       dashboard-metrics-scraper-6ffb444bf9-q6gjd   kubernetes-dashboard
	4b898bc10d22e       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago      Running             kubernetes-dashboard        0                   07cb4dd7b9d75       kubernetes-dashboard-855c9754f9-7m27d        kubernetes-dashboard
	bbf52faf92933       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   9cbf9a8976dae       busybox                                      default
	16f5a20811e08       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   d574c06d742c6       coredns-66bc5c9577-jktn8                     kube-system
	8fd71ca3934b0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   d8495a59ae7aa       storage-provisioner                          kube-system
	b59c79a123964       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   d76800a42f360       kindnet-m4lzl                                kube-system
	9ad903d67dde6       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   8efce21bda768       kube-proxy-5bf98                             kube-system
	b4e2a3adae3b2       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   d52c81e7d1e3a       kube-controller-manager-embed-certs-752315   kube-system
	0aaa1f21f536e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   c80152f6d78a7       kube-apiserver-embed-certs-752315            kube-system
	412f2a653f74c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   757207105ac69       kube-scheduler-embed-certs-752315            kube-system
	53cccbff24b07       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   be2a67235a074       etcd-embed-certs-752315                      kube-system
	
	
	==> coredns [16f5a20811e08c9a87436b830181d07a08d7c9c19042686b547a09d115b7077e] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40528 - 14662 "HINFO IN 5552716523772564236.1131787939876561468. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023549224s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-752315
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-752315
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=embed-certs-752315
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_30_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:30:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-752315
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:32:07 +0000   Sun, 26 Oct 2025 08:30:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-752315
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                cae690de-b1ed-4dcd-8194-03992c24069f
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-jktn8                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-embed-certs-752315                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-m4lzl                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-embed-certs-752315             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-752315    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-5bf98                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-embed-certs-752315             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-q6gjd    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7m27d         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-752315 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-752315 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-752315 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node embed-certs-752315 event: Registered Node embed-certs-752315 in Controller
	  Normal  NodeReady                97s                kubelet          Node embed-certs-752315 status is now: NodeReady
	  Normal  Starting                 59s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)  kubelet          Node embed-certs-752315 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)  kubelet          Node embed-certs-752315 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)  kubelet          Node embed-certs-752315 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-752315 event: Registered Node embed-certs-752315 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [53cccbff24b074724ed929ecf8bf44f382faed357e2e31b19207adb2df85cf66] <==
	{"level":"warn","ts":"2025-10-26T08:31:15.263022Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.271137Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.278364Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54470","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.285821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54486","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.292220Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.299390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.305869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.313509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.321575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.329384Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.336208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.342623Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54638","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.348776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.354967Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.361216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.375101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.382863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.389098Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.396521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.404154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.416911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.423769Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.431132Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:15.480225Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54894","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:31:31.141456Z","caller":"traceutil/trace.go:172","msg":"trace[445669958] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"131.987264ms","start":"2025-10-26T08:31:31.009448Z","end":"2025-10-26T08:31:31.141435Z","steps":["trace[445669958] 'process raft request'  (duration: 65.919112ms)","trace[445669958] 'compare'  (duration: 65.947986ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:32:12 up  1:14,  0 user,  load average: 4.51, 3.51, 2.19
	Linux embed-certs-752315 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b59c79a12396440c5b834d5c3f3895abb0777e31e4f19207a302ce038fb04e94] <==
	I1026 08:31:17.019592       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:31:17.112127       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1026 08:31:17.112384       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:31:17.112424       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:31:17.112455       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:31:17Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:31:17.316674       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:31:17.316756       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:31:17.316778       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:31:17.316906       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:31:17.708260       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:31:17.708342       1 metrics.go:72] Registering metrics
	I1026 08:31:17.708437       1 controller.go:711] "Syncing nftables rules"
	I1026 08:31:27.317534       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:27.317584       1 main.go:301] handling current node
	I1026 08:31:37.319656       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:37.319709       1 main.go:301] handling current node
	I1026 08:31:47.317448       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:47.317485       1 main.go:301] handling current node
	I1026 08:31:57.317526       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:31:57.317554       1 main.go:301] handling current node
	I1026 08:32:07.316627       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1026 08:32:07.316678       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0aaa1f21f536e556e63c92670b92d8a3ea70dc7a114b8586e7c128c24f8010e2] <==
	I1026 08:31:15.956161       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:31:15.955777       1 policy_source.go:240] refreshing policies
	I1026 08:31:15.957758       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1026 08:31:15.958265       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1026 08:31:15.956409       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 08:31:15.956382       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1026 08:31:15.959681       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:31:15.959762       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:31:15.959803       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:31:15.958887       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:31:15.960084       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	E1026 08:31:15.963391       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:31:15.968321       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:31:15.991037       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:31:16.205542       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:31:16.234337       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:31:16.259523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:31:16.265857       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:31:16.272978       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:31:16.323884       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.101.105.177"}
	I1026 08:31:16.334499       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.96.247.66"}
	I1026 08:31:16.862068       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:31:19.336016       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:31:19.437163       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:31:19.686483       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [b4e2a3adae3b260f24bc34d1fbff56bfc90e781b00b3ef7ade7ad5a02580d3d2] <==
	I1026 08:31:19.265524       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:31:19.282262       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 08:31:19.282280       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:31:19.282308       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1026 08:31:19.283306       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:31:19.283346       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:31:19.283354       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:31:19.283481       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:31:19.283573       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-752315"
	I1026 08:31:19.283620       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1026 08:31:19.284020       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:31:19.285776       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:31:19.285805       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:31:19.285820       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:31:19.285860       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:31:19.285907       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1026 08:31:19.286947       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:31:19.288046       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:31:19.288099       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:19.303338       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:19.309712       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:31:19.309731       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:31:19.309740       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:31:19.311875       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:31:19.322695       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [9ad903d67dde66294e4479668d0c5b6cf2ee2a72713eb621ec1ffceff453c1d3] <==
	I1026 08:31:16.901479       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:31:16.968924       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:31:17.069158       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:31:17.069201       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1026 08:31:17.069359       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:31:17.090832       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:31:17.090888       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:31:17.095776       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:31:17.096063       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:31:17.096088       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:17.097329       1 config.go:200] "Starting service config controller"
	I1026 08:31:17.097352       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:31:17.097386       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:31:17.097392       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:31:17.097410       1 config.go:309] "Starting node config controller"
	I1026 08:31:17.097419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:31:17.097426       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:31:17.097427       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:31:17.097440       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:31:17.198460       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:31:17.198577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:31:17.198587       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [412f2a653f74cbf8314bc01c58e251aad5fd401f7370feb8ab90dacb1abcda0a] <==
	I1026 08:31:15.327390       1 serving.go:386] Generated self-signed cert in-memory
	I1026 08:31:16.235503       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:31:16.235534       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:16.240177       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1026 08:31:16.240190       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.240216       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1026 08:31:16.240217       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.240210       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:16.240316       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1026 08:31:16.240751       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:31:16.240788       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:31:16.341025       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:31:16.341037       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1026 08:31:16.341173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:20 embed-certs-752315 kubelet[721]: I1026 08:31:20.025300     721 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4gnv\" (UniqueName: \"kubernetes.io/projected/c2ba33f0-784d-4cd9-9324-324155d48377-kube-api-access-c4gnv\") pod \"kubernetes-dashboard-855c9754f9-7m27d\" (UID: \"c2ba33f0-784d-4cd9-9324-324155d48377\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m27d"
	Oct 26 08:31:23 embed-certs-752315 kubelet[721]: I1026 08:31:23.637088     721 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:31:25 embed-certs-752315 kubelet[721]: I1026 08:31:25.313689     721 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7m27d" podStartSLOduration=3.398766922 podStartE2EDuration="6.313663123s" podCreationTimestamp="2025-10-26 08:31:19 +0000 UTC" firstStartedPulling="2025-10-26 08:31:20.238666613 +0000 UTC m=+6.853087427" lastFinishedPulling="2025-10-26 08:31:23.153562819 +0000 UTC m=+9.767983628" observedRunningTime="2025-10-26 08:31:23.556481279 +0000 UTC m=+10.170902094" watchObservedRunningTime="2025-10-26 08:31:25.313663123 +0000 UTC m=+11.928083937"
	Oct 26 08:31:26 embed-certs-752315 kubelet[721]: I1026 08:31:26.549381     721 scope.go:117] "RemoveContainer" containerID="c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: I1026 08:31:27.553699     721 scope.go:117] "RemoveContainer" containerID="c2f733c838fe6eeb5c6bfc90137afd4de8c63e55aae945c1a408feffd4b5d1e2"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: I1026 08:31:27.553836     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:27 embed-certs-752315 kubelet[721]: E1026 08:31:27.554076     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:28 embed-certs-752315 kubelet[721]: I1026 08:31:28.558708     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:28 embed-certs-752315 kubelet[721]: E1026 08:31:28.558877     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:31 embed-certs-752315 kubelet[721]: I1026 08:31:31.966167     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:31 embed-certs-752315 kubelet[721]: E1026 08:31:31.966386     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.488435     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.609374     721 scope.go:117] "RemoveContainer" containerID="aad87e0e5c2d9efeaedbb2719e27f4790f29a079704dc1620b4f829080c2e767"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.609616     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: E1026 08:31:47.609815     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:31:47 embed-certs-752315 kubelet[721]: I1026 08:31:47.611174     721 scope.go:117] "RemoveContainer" containerID="8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357"
	Oct 26 08:31:51 embed-certs-752315 kubelet[721]: I1026 08:31:51.966836     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:31:51 embed-certs-752315 kubelet[721]: E1026 08:31:51.966984     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:32:03 embed-certs-752315 kubelet[721]: I1026 08:32:03.488725     721 scope.go:117] "RemoveContainer" containerID="03fbe11ac295690c2200822367d90ffc871b7203f060a5f4c95221e7bf0038c8"
	Oct 26 08:32:03 embed-certs-752315 kubelet[721]: E1026 08:32:03.488959     721 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-q6gjd_kubernetes-dashboard(1f5ff53f-3467-4f0a-9e64-63941f09bdfa)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-q6gjd" podUID="1f5ff53f-3467-4f0a-9e64-63941f09bdfa"
	Oct 26 08:32:07 embed-certs-752315 kubelet[721]: I1026 08:32:07.680126     721 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:32:07 embed-certs-752315 systemd[1]: kubelet.service: Consumed 1.734s CPU time.
	
	
	==> kubernetes-dashboard [4b898bc10d22ebec112eb26c1c60033644c1c9521519a40efded7e7d0fb11a33] <==
	2025/10/26 08:31:23 Using namespace: kubernetes-dashboard
	2025/10/26 08:31:23 Using in-cluster config to connect to apiserver
	2025/10/26 08:31:23 Using secret token for csrf signing
	2025/10/26 08:31:23 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:31:23 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:31:23 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:31:23 Generating JWE encryption key
	2025/10/26 08:31:23 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:31:23 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:31:23 Initializing JWE encryption key from synchronized object
	2025/10/26 08:31:23 Creating in-cluster Sidecar client
	2025/10/26 08:31:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:23 Serving insecurely on HTTP port: 9090
	2025/10/26 08:31:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:31:23 Starting overwatch
	
	
	==> storage-provisioner [8fd71ca3934b0c337a8942ef6b2577f1a2eb884b4dd3e8c1621585332293a357] <==
	I1026 08:31:16.869507       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:31:46.873637       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [cd0e34a9885583a9a29db7cdcc3d3a07ecdcf1caeb106520ab4774f551b50196] <==
	I1026 08:31:47.684311       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:31:47.693805       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:31:47.693909       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:31:47.697189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:51.152891       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:55.413876       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:31:59.012984       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:02.067414       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.089619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.096047       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:05.096164       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:32:05.096336       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95!
	I1026 08:32:05.096306       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cf86141f-07c1-4e09-9431-3b0349d6fa2c", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95 became leader
	W1026 08:32:05.098478       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:05.101836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:05.196616       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-752315_6df4a833-2076-405d-9339-0c93df2fad95!
	W1026 08:32:07.104828       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:07.110198       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:09.114281       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:09.118389       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:11.121403       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:11.126894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
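
The repeated "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above are emitted because its leader-election lock still polls core/v1 Endpoints objects. For orientation, a minimal client-go sketch of reading the suggested replacement resource, discovery.k8s.io/v1 EndpointSlice, follows; it is illustrative only (kubeconfig at the default path is assumed) and is not the provisioner's code, which would need its lock type migrated for the warnings to stop.

	// endpointslices.go: hedged sketch, not part of minikube or the provisioner.
	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load the default kubeconfig (~/.kube/config); an assumption for this sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// EndpointSlices expose the same backend data as v1 Endpoints without
		// triggering the apiserver deprecation warning seen in the log above.
		slices, err := client.DiscoveryV1().EndpointSlices("kube-system").List(
			context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s: %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}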
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752315 -n embed-certs-752315
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-752315 -n embed-certs-752315: exit status 2 (378.63743ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-752315 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (6.01s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.65s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (330.080454ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:17Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
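
For context, the failure chain above ("check paused: list paused: runc: sudo runc list -f json") is minikube shelling out to runc to check whether the cluster's containers are paused before enabling the addon; on this crio node the runc state directory /run/runc is absent, so the command exits 1. A rough Go sketch of that style of check follows; the names are illustrative and this is not minikube's actual implementation.

	// pausedcheck.go: hedged sketch of a "list paused containers via runc" probe.
	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields of interest in `runc list -f json` output.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // e.g. "running", "paused"
	}

	func listPaused() ([]string, error) {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			// The failure mode in the stderr above: runc cannot open /run/runc.
			return nil, fmt.Errorf("runc list: %w", err)
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}

	func main() {
		ids, err := listPaused()
		if err != nil {
			fmt.Println("check paused failed:", err)
			return
		}
		fmt.Println("paused containers:", ids)
	}

Run inside the node (e.g. via minikube ssh), this should reproduce the same "open /run/runc: no such file or directory" error shown above.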
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-866212 describe deploy/metrics-server -n kube-system: exit status 1 (90.900816ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-866212 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
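
The assertion above expects the metrics-server deployment's pod template to reference the overridden image fake.domain/registry.k8s.io/echoserver:1.4; because the addon never enabled, the deployment is missing entirely. A hedged sketch of the equivalent manual check follows (kubectl on PATH and the test's context are assumed; this is not the test's own assertion code).

	// imagecheck.go: hedged sketch of verifying the metrics-server addon image.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("kubectl",
			"--context", "default-k8s-diff-port-866212",
			"-n", "kube-system",
			"get", "deploy", "metrics-server",
			"-o", "jsonpath={.spec.template.spec.containers[*].image}").CombinedOutput()
		if err != nil {
			// Matches this report: the deployment is absent, so kubectl errors.
			fmt.Println("lookup failed:", strings.TrimSpace(string(out)))
			return
		}
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon loaded the overridden image")
		} else {
			fmt.Println("unexpected image(s):", string(out))
		}
	}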
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-866212
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-866212:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	        "Created": "2025-10-26T08:31:33.082391712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 265438,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:31:33.137663719Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hostname",
	        "HostsPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hosts",
	        "LogPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed-json.log",
	        "Name": "/default-k8s-diff-port-866212",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-866212:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-866212",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	                "LowerDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-866212",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-866212/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-866212",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "07bd5ef896bbc75d6a87b31334809c1864c96fcdec7cd6b2cf5e882a68159714",
	            "SandboxKey": "/var/run/docker/netns/07bd5ef896bb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33083"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33085"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33087"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-866212": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:43:a2:a3:13:62",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6895eb84e54294e7e4b0c2ef3aabe968c7a2cc155d3fbec01d47d6ad909fa85",
	                    "EndpointID": "e460a433f1e80f056db42df8a69623a2d11a5597741f7d2898806653c3dc95ac",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-866212",
	                        "9325d9bcbadd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
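
The NetworkSettings.Ports block in the inspect output above records where each container port was published on the host loopback interface (for example 8444/tcp, the custom apiserver port, at 127.0.0.1:33089). A small Go sketch pulling one such mapping out with docker inspect's format flag (container name taken from this report):

	// hostport.go: hedged sketch; queries the mapping shown in the inspect dump.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-866212").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver published at 127.0.0.1:" + strings.TrimSpace(string(out)))
	}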
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25: (1.258537878s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:30 UTC │
	│ start   │ -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:30 UTC │ 26 Oct 25 08:31 UTC │
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:32:16.612087  278592 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:16.614229  278592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:16.614266  278592 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:16.614282  278592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:16.614759  278592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:16.615869  278592 out.go:368] Setting JSON to false
	I1026 08:32:16.617939  278592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4488,"bootTime":1761463049,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:16.618128  278592 start.go:141] virtualization: kvm guest
	I1026 08:32:16.620455  278592 out.go:179] * [kindnet-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:16.621967  278592 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:16.621971  278592 notify.go:220] Checking for updates...
	I1026 08:32:16.628981  278592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:16.630623  278592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:16.632487  278592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:16.633930  278592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:16.636777  278592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:16.638874  278592 config.go:182] Loaded profile config "auto-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639045  278592 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639211  278592 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639349  278592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:16.674844  278592 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:16.675000  278592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:16.759200  278592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:16.743141488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:16.759390  278592 docker.go:318] overlay module found
	I1026 08:32:16.762482  278592 out.go:179] * Using the docker driver based on user configuration
	I1026 08:32:16.763818  278592 start.go:305] selected driver: docker
	I1026 08:32:16.763835  278592 start.go:925] validating driver "docker" against <nil>
	I1026 08:32:16.763847  278592 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:16.764484  278592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:16.854338  278592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:16.84246589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:16.854656  278592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:32:16.855159  278592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:16.857180  278592 out.go:179] * Using Docker driver with root privileges
	I1026 08:32:16.858510  278592 cni.go:84] Creating CNI manager for "kindnet"
	I1026 08:32:16.858557  278592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:32:16.858657  278592 start.go:349] cluster config:
	{Name:kindnet-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:16.860071  278592 out.go:179] * Starting "kindnet-110992" primary control-plane node in "kindnet-110992" cluster
	I1026 08:32:16.861301  278592 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:16.862891  278592 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:32:16.864220  278592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:16.864298  278592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:16.864352  278592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:16.864367  278592 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:16.864538  278592 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:16.864552  278592 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:16.864744  278592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kindnet-110992/config.json ...
	I1026 08:32:16.864782  278592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kindnet-110992/config.json: {Name:mk6340d63c3702426afb691e46b9d2a183a30ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:16.890809  278592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:16.890862  278592 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:16.890882  278592 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:32:16.890916  278592 start.go:360] acquireMachinesLock for kindnet-110992: {Name:mk0ffaf9908881b6a0934c083112125f38970f56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:16.891026  278592 start.go:364] duration metric: took 89.576µs to acquireMachinesLock for "kindnet-110992"
	I1026 08:32:16.891053  278592 start.go:93] Provisioning new machine with config: &{Name:kindnet-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:16.891143  278592 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:32:17.190887  270203 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:32:17.190988  270203 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:32:17.191182  270203 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:32:17.191323  270203 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:32:17.191375  270203 kubeadm.go:318] OS: Linux
	I1026 08:32:17.191466  270203 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:32:17.191568  270203 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:32:17.191711  270203 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:32:17.191804  270203 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:32:17.191879  270203 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:32:17.191971  270203 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:32:17.192060  270203 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:32:17.192121  270203 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:32:17.192198  270203 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:32:17.192334  270203 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:32:17.192447  270203 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:32:17.192520  270203 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 08:32:17.194894  270203 out.go:252]   - Generating certificates and keys ...
	I1026 08:32:17.194993  270203 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:32:17.195085  270203 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:32:17.195168  270203 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:32:17.195244  270203 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:32:17.195336  270203 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:32:17.195397  270203 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:32:17.195468  270203 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:32:17.195625  270203 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-366970] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 08:32:17.195692  270203 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:32:17.195845  270203 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-366970] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 08:32:17.195959  270203 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:32:17.196101  270203 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 08:32:17.196196  270203 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:32:17.196406  270203 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:32:17.196499  270203 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:32:17.196603  270203 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:32:17.196723  270203 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:32:17.196886  270203 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:32:17.196997  270203 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:32:17.197105  270203 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:32:17.197201  270203 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:32:17.199181  270203 out.go:252]   - Booting up control plane ...
	I1026 08:32:17.199317  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:32:17.199421  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:32:17.199504  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:32:17.199635  270203 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:32:17.199784  270203 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:32:17.199926  270203 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:32:17.199999  270203 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:32:17.200032  270203 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:32:17.200136  270203 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:32:17.200219  270203 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:32:17.200289  270203 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00088434s
	I1026 08:32:17.200406  270203 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:32:17.200514  270203 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 08:32:17.200640  270203 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:32:17.200750  270203 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:32:17.200882  270203 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.035122888s
	I1026 08:32:17.200968  270203 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.255250898s
	I1026 08:32:17.201061  270203 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001973335s
	I1026 08:32:17.201215  270203 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:32:17.201386  270203 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:32:17.201442  270203 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:32:17.201654  270203 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-366970 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:32:17.201761  270203 kubeadm.go:318] [bootstrap-token] Using token: h3axcx.evqrjgvunvsgjdn2
	I1026 08:32:17.208590  270203 out.go:252]   - Configuring RBAC rules ...
	I1026 08:32:17.208734  270203 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:32:17.208839  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:32:17.209014  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:32:17.209173  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:32:17.209437  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:32:17.209568  270203 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:32:17.209702  270203 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:32:17.209756  270203 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:32:17.209813  270203 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:32:17.209819  270203 kubeadm.go:318] 
	I1026 08:32:17.209895  270203 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:32:17.209903  270203 kubeadm.go:318] 
	I1026 08:32:17.210011  270203 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:32:17.210036  270203 kubeadm.go:318] 
	I1026 08:32:17.210069  270203 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:32:17.210149  270203 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:32:17.210241  270203 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:32:17.210292  270203 kubeadm.go:318] 
	I1026 08:32:17.210369  270203 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:32:17.210379  270203 kubeadm.go:318] 
	I1026 08:32:17.210452  270203 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:32:17.210465  270203 kubeadm.go:318] 
	I1026 08:32:17.210540  270203 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:32:17.210645  270203 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:32:17.210723  270203 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:32:17.210728  270203 kubeadm.go:318] 
	I1026 08:32:17.210822  270203 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:32:17.210912  270203 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:32:17.210918  270203 kubeadm.go:318] 
	I1026 08:32:17.211021  270203 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h3axcx.evqrjgvunvsgjdn2 \
	I1026 08:32:17.211156  270203 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:32:17.211184  270203 kubeadm.go:318] 	--control-plane 
	I1026 08:32:17.211189  270203 kubeadm.go:318] 
	I1026 08:32:17.211301  270203 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:32:17.211308  270203 kubeadm.go:318] 
	I1026 08:32:17.211404  270203 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h3axcx.evqrjgvunvsgjdn2 \
	I1026 08:32:17.211510  270203 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:32:17.211521  270203 cni.go:84] Creating CNI manager for ""
	I1026 08:32:17.211529  270203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:17.213403  270203 out.go:179] * Configuring CNI (Container Networking Interface) ...
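
Note on the kubeadm join commands above: the --discovery-token-ca-cert-hash value ("sha256:3c3e...") is the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of that derivation follows; the CA path is an assumption based on the certificateDir "/var/lib/minikube/certs" logged in the [certs] phase, not something recorded by this run.

	// cacerthash.go: recompute the value passed via --discovery-token-ca-cert-hash.
	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Assumption: CA location matches the certificateDir in the log above.
		pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			panic("no PEM block in ca.crt")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm pins the SHA-256 of the DER-encoded Subject Public Key Info.
		sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
		fmt.Printf("sha256:%x\n", sum)
	}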
	
	
	==> CRI-O <==
	Oct 26 08:32:06 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:06.006682129Z" level=info msg="Starting container: 4a2f335741e8e48139fed2405803f57679937fc41f6daa80092896aa37a35416" id=b0f6e0db-0278-4fd3-b138-c38c4067f5a2 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:06 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:06.009092407Z" level=info msg="Started container" PID=1848 containerID=4a2f335741e8e48139fed2405803f57679937fc41f6daa80092896aa37a35416 description=kube-system/coredns-66bc5c9577-h4dk5/coredns id=b0f6e0db-0278-4fd3-b138-c38c4067f5a2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=21f7a54b8e8204339a8421ff06bf86259520dc0c3614e5ad81d8c1b7850dd058
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.932161261Z" level=info msg="Running pod sandbox: default/busybox/POD" id=70fbdb1e-aaa8-4d96-8cde-e01caf1ac9ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.932242711Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.937450101Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9bdff5dd37b815308e8602814e10b00a04bf56c86bcd7fecb5e5868dc979cea UID:b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0 NetNS:/var/run/netns/57ec5698-59f5-4176-b585-5368873d463a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b8510}] Aliases:map[]}"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.937486384Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.949114868Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f9bdff5dd37b815308e8602814e10b00a04bf56c86bcd7fecb5e5868dc979cea UID:b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0 NetNS:/var/run/netns/57ec5698-59f5-4176-b585-5368873d463a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005b8510}] Aliases:map[]}"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.949242758Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.951594272Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.953478827Z" level=info msg="Ran pod sandbox f9bdff5dd37b815308e8602814e10b00a04bf56c86bcd7fecb5e5868dc979cea with infra container: default/busybox/POD" id=70fbdb1e-aaa8-4d96-8cde-e01caf1ac9ee name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.954964888Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=33e1e1de-40da-4e16-a0f0-14188e8b0673 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.955108125Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=33e1e1de-40da-4e16-a0f0-14188e8b0673 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.955153565Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=33e1e1de-40da-4e16-a0f0-14188e8b0673 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.956018485Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9320c166-0548-4ae9-8ca4-8a46f59c5860 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:32:08 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:08.959825624Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.360296578Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=9320c166-0548-4ae9-8ca4-8a46f59c5860 name=/runtime.v1.ImageService/PullImage
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.361348413Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=51a4d48d-7db2-4747-810a-66cadc868e8a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.362887401Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6087f4b9-f62f-4aff-aecb-101aba4c859d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.366678041Z" level=info msg="Creating container: default/busybox/busybox" id=e63eba0c-7c34-454c-9bfe-6b6cbb375fc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.366824358Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.371318933Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.371822072Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.403641657Z" level=info msg="Created container ecb321bda4997ad9a1ab070bae177cc5053baf8e62b8049e9561b814b6b39352: default/busybox/busybox" id=e63eba0c-7c34-454c-9bfe-6b6cbb375fc0 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.404709786Z" level=info msg="Starting container: ecb321bda4997ad9a1ab070bae177cc5053baf8e62b8049e9561b814b6b39352" id=23f50564-e6e7-422b-843e-88ac79aaeeb5 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:10 default-k8s-diff-port-866212 crio[778]: time="2025-10-26T08:32:10.406747028Z" level=info msg="Started container" PID=1918 containerID=ecb321bda4997ad9a1ab070bae177cc5053baf8e62b8049e9561b814b6b39352 description=default/busybox/busybox id=23f50564-e6e7-422b-843e-88ac79aaeeb5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f9bdff5dd37b815308e8602814e10b00a04bf56c86bcd7fecb5e5868dc979cea
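
The CRI-O entries above trace the standard CRI lifecycle for the busybox pod: RunPodSandbox, then ImageStatus/PullImage, then CreateContainer and StartContainer. For debugging, the same RuntimeService can be queried directly over the socket; the sketch below lists containers that way. The socket path (CRI-O's conventional default) and the grpc/cri-api module versions are assumptions, not taken from this run.

	// crilist.go: list containers via the CRI RuntimeService, the same gRPC
	// API whose CreateContainer/StartContainer calls appear in the log above.
	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Assumption: CRI-O's default socket path.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			// Truncate IDs to the 13-char form used in the container status table.
			fmt.Println(c.Id[:13], c.Metadata.Name, c.State)
		}
	}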
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	ecb321bda4997       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   f9bdff5dd37b8       busybox                                                default
	4a2f335741e8e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   21f7a54b8e820       coredns-66bc5c9577-h4dk5                               kube-system
	e187ccdf56966       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   474d06b5264f3       storage-provisioner                                    kube-system
	aec86151b5f07       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   847ec118fc6fb       kube-proxy-m4gfc                                       kube-system
	97fb3ae3ed199       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   71e38c098a8cb       kindnet-vr7fg                                          kube-system
	d75627e323f98       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   32f6c44b2125b       kube-apiserver-default-k8s-diff-port-866212            kube-system
	2fd7fbf704d50       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   648f1caac8e6f       etcd-default-k8s-diff-port-866212                      kube-system
	cb612b39a9b4e       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   c3a8d0def6f59       kube-controller-manager-default-k8s-diff-port-866212   kube-system
	779730f146edb       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   563e84e13e0c5       kube-scheduler-default-k8s-diff-port-866212            kube-system
	
	
	==> coredns [4a2f335741e8e48139fed2405803f57679937fc41f6daa80092896aa37a35416] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40041 - 57154 "HINFO IN 8966910692376431876.8933786528597962661. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.0313141s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-866212
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-866212
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-866212
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-866212
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:05 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:05 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:05 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:32:05 +0000   Sun, 26 Oct 2025 08:32:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-866212
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                35b0b8af-89ca-40c6-acd5-1ad4f6cfade6
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-h4dk5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-866212                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-vr7fg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-866212             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-866212    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-m4gfc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-866212             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 23s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s   kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s   kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s   kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-866212 event: Registered Node default-k8s-diff-port-866212 in Controller
	  Normal  NodeReady                13s   kubelet          Node default-k8s-diff-port-866212 status is now: NodeReady
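
The "Allocated resources" totals in the node description above can be cross-checked against the per-pod table: CPU requests are 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m each for coredns, etcd, kindnet, and kube-scheduler = 850m, and only kindnet declares limits, giving the 100m CPU limit total. Memory requests are 70Mi (coredns) + 100Mi (etcd) + 50Mi (kindnet) = 220Mi, and memory limits are 170Mi (coredns) + 50Mi (kindnet) = 220Mi.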
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [2fd7fbf704d505cbe83e8f11a687e63b5590a3cacca3a3880ce20ac4b7a2fde4] <==
	{"level":"warn","ts":"2025-10-26T08:31:45.595681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.603617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50004","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.611005Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.621924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.630148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.639341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.646656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50110","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.654376Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.660999Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.667953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.674595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.680791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.687511Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.708152Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.715016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.721537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:45.772886Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:31:57.384802Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.058683ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-866212\" limit:1 ","response":"range_response_count:1 size:5662"}
	{"level":"info","ts":"2025-10-26T08:31:57.384937Z","caller":"traceutil/trace.go:172","msg":"trace[1160346900] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-866212; range_end:; response_count:1; response_revision:381; }","duration":"100.211455ms","start":"2025-10-26T08:31:57.284707Z","end":"2025-10-26T08:31:57.384918Z","steps":["trace[1160346900] 'agreement among raft nodes before linearized reading'  (duration: 99.942563ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:31:57.384868Z","caller":"traceutil/trace.go:172","msg":"trace[637402175] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"106.433297ms","start":"2025-10-26T08:31:57.278411Z","end":"2025-10-26T08:31:57.384844Z","steps":["trace[637402175] 'process raft request'  (duration: 106.256495ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T08:31:58.138551Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.288824ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741968041139 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.94.2\" mod_revision:203 > success:<request_put:<key:\"/registry/masterleases/192.168.94.2\" value_size:65 lease:6571765741968041137 >> failure:<request_range:<key:\"/registry/masterleases/192.168.94.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:31:58.138640Z","caller":"traceutil/trace.go:172","msg":"trace[648880386] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"251.024229ms","start":"2025-10-26T08:31:57.887605Z","end":"2025-10-26T08:31:58.138629Z","steps":["trace[648880386] 'process raft request'  (duration: 125.19295ms)","trace[648880386] 'compare'  (duration: 125.133896ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:31:58.939508Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"155.012916ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-866212\" limit:1 ","response":"range_response_count:1 size:5662"}
	{"level":"info","ts":"2025-10-26T08:31:58.939561Z","caller":"traceutil/trace.go:172","msg":"trace[614937093] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-866212; range_end:; response_count:1; response_revision:385; }","duration":"155.084633ms","start":"2025-10-26T08:31:58.784464Z","end":"2025-10-26T08:31:58.939549Z","steps":["trace[614937093] 'range keys from in-memory index tree'  (duration: 154.862257ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:03.200341Z","caller":"traceutil/trace.go:172","msg":"trace[409906351] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"134.878957ms","start":"2025-10-26T08:32:03.065434Z","end":"2025-10-26T08:32:03.200313Z","steps":["trace[409906351] 'process raft request'  (duration: 124.582587ms)","trace[409906351] 'compare'  (duration: 10.095137ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:32:18 up  1:14,  0 user,  load average: 4.39, 3.50, 2.20
	Linux default-k8s-diff-port-866212 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [97fb3ae3ed199e0f781a9a427fe5fb83b91a5573f6ef294f6eff53fcc2cb0224] <==
	I1026 08:31:54.979716       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:31:54.980042       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:31:54.980231       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:31:54.980322       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:31:54.980368       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:31:55Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:31:55.181980       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:31:55.182010       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:31:55.182023       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:31:55.182163       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:31:55.682862       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:31:55.682899       1 metrics.go:72] Registering metrics
	I1026 08:31:55.682983       1 controller.go:711] "Syncing nftables rules"
	I1026 08:32:05.182374       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:32:05.182431       1 main.go:301] handling current node
	I1026 08:32:15.181694       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:32:15.181753       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d75627e323f9802d4d1d8eeb4512ff42524fab3c590207adc407e87fcc82dc83] <==
	I1026 08:31:46.258098       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:31:46.258112       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:31:46.258118       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:31:46.258124       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:31:46.260205       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:31:46.260586       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:31:46.448199       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:31:47.153705       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:31:47.157950       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:31:47.157968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:31:47.736797       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:31:47.785537       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:31:47.859698       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:31:47.865900       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1026 08:31:47.867065       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:31:47.872077       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:31:48.197196       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:31:48.810822       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:31:48.822694       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:31:48.834586       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:31:54.105493       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:31:54.152900       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:31:54.159841       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:31:54.303487       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1026 08:32:16.750645       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8444->192.168.94.1:34880: use of closed network connection
	
	
	==> kube-controller-manager [cb612b39a9b4e4261df638f4ac309cf3093b1165454e0f8edb08fb544835c4d6] <==
	I1026 08:31:53.197631       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:31:53.197699       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:31:53.197697       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:31:53.198113       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:31:53.200128       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:31:53.201317       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:31:53.201345       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 08:31:53.202756       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:31:53.204157       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:31:53.210332       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:31:53.218669       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 08:31:53.236173       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:31:53.245653       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:31:53.245784       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:31:53.245905       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-866212"
	I1026 08:31:53.245970       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 08:31:53.247101       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1026 08:31:53.247242       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:31:53.247352       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 08:31:53.247352       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:31:53.247427       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:31:53.247436       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:31:53.247855       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:31:53.249048       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:32:08.247894       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [aec86151b5f074cc9125d76405b42982cbcde96765aac9734e1d3bc508d8706b] <==
	I1026 08:31:54.743772       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:31:54.814401       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:31:54.915235       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:31:54.915294       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1026 08:31:54.915392       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:31:54.936328       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:31:54.936386       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:31:54.943010       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:31:54.943705       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:31:54.943753       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:31:54.945800       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:31:54.945849       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:31:54.945891       1 config.go:200] "Starting service config controller"
	I1026 08:31:54.945898       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:31:54.945942       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:31:54.945960       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:31:54.946059       1 config.go:309] "Starting node config controller"
	I1026 08:31:54.946081       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:31:55.046051       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:31:55.046065       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:31:55.046070       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:31:55.046350       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [779730f146edbedf631e8a42b5609df642b1831555d3ae82ec053f8d7f3338a0] <==
	E1026 08:31:46.212349       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:31:46.212410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:31:46.212420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:31:46.212524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:31:46.212844       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:31:46.213140       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:31:46.213360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:31:46.213390       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:31:46.213411       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:31:47.025964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:31:47.057388       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1026 08:31:47.087764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:31:47.143558       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:31:47.163950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:31:47.191441       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1026 08:31:47.217545       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:31:47.226702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:31:47.236089       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:31:47.385549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:31:47.421215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:31:47.461412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:31:47.488685       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1026 08:31:47.494077       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:31:47.519546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1026 08:31:49.607807       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:31:49 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:49.713814    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-866212" podStartSLOduration=1.713788567 podStartE2EDuration="1.713788567s" podCreationTimestamp="2025-10-26 08:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:49.702992014 +0000 UTC m=+1.151108555" watchObservedRunningTime="2025-10-26 08:31:49.713788567 +0000 UTC m=+1.161905109"
	Oct 26 08:31:49 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:49.724377    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-866212" podStartSLOduration=1.724355814 podStartE2EDuration="1.724355814s" podCreationTimestamp="2025-10-26 08:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:49.714310884 +0000 UTC m=+1.162427426" watchObservedRunningTime="2025-10-26 08:31:49.724355814 +0000 UTC m=+1.172472359"
	Oct 26 08:31:49 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:49.736604    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-866212" podStartSLOduration=1.736577966 podStartE2EDuration="1.736577966s" podCreationTimestamp="2025-10-26 08:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:49.724823004 +0000 UTC m=+1.172939556" watchObservedRunningTime="2025-10-26 08:31:49.736577966 +0000 UTC m=+1.184694509"
	Oct 26 08:31:49 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:49.750495    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-866212" podStartSLOduration=1.750474962 podStartE2EDuration="1.750474962s" podCreationTimestamp="2025-10-26 08:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:49.736762953 +0000 UTC m=+1.184879495" watchObservedRunningTime="2025-10-26 08:31:49.750474962 +0000 UTC m=+1.198591486"
	Oct 26 08:31:53 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:53.193643    1329 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Oct 26 08:31:53 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:53.194429    1329 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.379848    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/029bb2f9-cc20-4deb-8eca-da1405fd2c84-kube-proxy\") pod \"kube-proxy-m4gfc\" (UID: \"029bb2f9-cc20-4deb-8eca-da1405fd2c84\") " pod="kube-system/kube-proxy-m4gfc"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.379976    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sfx8\" (UniqueName: \"kubernetes.io/projected/c665249b-007a-4348-8905-c4ba71426d5c-kube-api-access-7sfx8\") pod \"kindnet-vr7fg\" (UID: \"c665249b-007a-4348-8905-c4ba71426d5c\") " pod="kube-system/kindnet-vr7fg"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380013    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/029bb2f9-cc20-4deb-8eca-da1405fd2c84-xtables-lock\") pod \"kube-proxy-m4gfc\" (UID: \"029bb2f9-cc20-4deb-8eca-da1405fd2c84\") " pod="kube-system/kube-proxy-m4gfc"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380072    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/029bb2f9-cc20-4deb-8eca-da1405fd2c84-lib-modules\") pod \"kube-proxy-m4gfc\" (UID: \"029bb2f9-cc20-4deb-8eca-da1405fd2c84\") " pod="kube-system/kube-proxy-m4gfc"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380124    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/c665249b-007a-4348-8905-c4ba71426d5c-cni-cfg\") pod \"kindnet-vr7fg\" (UID: \"c665249b-007a-4348-8905-c4ba71426d5c\") " pod="kube-system/kindnet-vr7fg"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380147    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c665249b-007a-4348-8905-c4ba71426d5c-xtables-lock\") pod \"kindnet-vr7fg\" (UID: \"c665249b-007a-4348-8905-c4ba71426d5c\") " pod="kube-system/kindnet-vr7fg"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380231    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2gn2\" (UniqueName: \"kubernetes.io/projected/029bb2f9-cc20-4deb-8eca-da1405fd2c84-kube-api-access-t2gn2\") pod \"kube-proxy-m4gfc\" (UID: \"029bb2f9-cc20-4deb-8eca-da1405fd2c84\") " pod="kube-system/kube-proxy-m4gfc"
	Oct 26 08:31:54 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:54.380318    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c665249b-007a-4348-8905-c4ba71426d5c-lib-modules\") pod \"kindnet-vr7fg\" (UID: \"c665249b-007a-4348-8905-c4ba71426d5c\") " pod="kube-system/kindnet-vr7fg"
	Oct 26 08:31:55 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:55.706806    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-m4gfc" podStartSLOduration=1.7067823789999998 podStartE2EDuration="1.706782379s" podCreationTimestamp="2025-10-26 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:55.706681291 +0000 UTC m=+7.154797836" watchObservedRunningTime="2025-10-26 08:31:55.706782379 +0000 UTC m=+7.154898920"
	Oct 26 08:31:55 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:31:55.731226    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vr7fg" podStartSLOduration=1.731199918 podStartE2EDuration="1.731199918s" podCreationTimestamp="2025-10-26 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:31:55.731133582 +0000 UTC m=+7.179250124" watchObservedRunningTime="2025-10-26 08:31:55.731199918 +0000 UTC m=+7.179316460"
	Oct 26 08:32:05 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:05.597967    1329 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Oct 26 08:32:05 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:05.659390    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmdz9\" (UniqueName: \"kubernetes.io/projected/18fbe340-fefc-49cc-9816-4af780af38c5-kube-api-access-vmdz9\") pod \"coredns-66bc5c9577-h4dk5\" (UID: \"18fbe340-fefc-49cc-9816-4af780af38c5\") " pod="kube-system/coredns-66bc5c9577-h4dk5"
	Oct 26 08:32:05 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:05.659454    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a87f2f9f-e47d-4081-b53e-0b0017e791ae-tmp\") pod \"storage-provisioner\" (UID: \"a87f2f9f-e47d-4081-b53e-0b0017e791ae\") " pod="kube-system/storage-provisioner"
	Oct 26 08:32:05 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:05.659477    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5trp\" (UniqueName: \"kubernetes.io/projected/a87f2f9f-e47d-4081-b53e-0b0017e791ae-kube-api-access-j5trp\") pod \"storage-provisioner\" (UID: \"a87f2f9f-e47d-4081-b53e-0b0017e791ae\") " pod="kube-system/storage-provisioner"
	Oct 26 08:32:05 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:05.659497    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/18fbe340-fefc-49cc-9816-4af780af38c5-config-volume\") pod \"coredns-66bc5c9577-h4dk5\" (UID: \"18fbe340-fefc-49cc-9816-4af780af38c5\") " pod="kube-system/coredns-66bc5c9577-h4dk5"
	Oct 26 08:32:06 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:06.747728    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h4dk5" podStartSLOduration=12.747703812 podStartE2EDuration="12.747703812s" podCreationTimestamp="2025-10-26 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:06.747519804 +0000 UTC m=+18.195636358" watchObservedRunningTime="2025-10-26 08:32:06.747703812 +0000 UTC m=+18.195820353"
	Oct 26 08:32:06 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:06.748006    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.747995112 podStartE2EDuration="12.747995112s" podCreationTimestamp="2025-10-26 08:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:06.73629441 +0000 UTC m=+18.184410967" watchObservedRunningTime="2025-10-26 08:32:06.747995112 +0000 UTC m=+18.196111657"
	Oct 26 08:32:08 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:08.678299    1329 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x58k2\" (UniqueName: \"kubernetes.io/projected/b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0-kube-api-access-x58k2\") pod \"busybox\" (UID: \"b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0\") " pod="default/busybox"
	Oct 26 08:32:10 default-k8s-diff-port-866212 kubelet[1329]: I1026 08:32:10.749504    1329 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.342667888 podStartE2EDuration="2.74948009s" podCreationTimestamp="2025-10-26 08:32:08 +0000 UTC" firstStartedPulling="2025-10-26 08:32:08.955512065 +0000 UTC m=+20.403628591" lastFinishedPulling="2025-10-26 08:32:10.362324271 +0000 UTC m=+21.810440793" observedRunningTime="2025-10-26 08:32:10.74894733 +0000 UTC m=+22.197063872" watchObservedRunningTime="2025-10-26 08:32:10.74948009 +0000 UTC m=+22.197596632"
	
	
	==> storage-provisioner [e187ccdf5696639cb430c553ae8a7e60b57c2082080d57f4a10bc29d41d8822d] <==
	I1026 08:32:06.009015       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:32:06.018611       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:32:06.018665       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:32:06.021350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:06.026259       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:06.026431       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:32:06.026630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_8c90eb05-6a7b-4730-85a4-25a3ea1dc5cc!
	I1026 08:32:06.026606       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46264170-5b73-4301-a763-5e3adc5f609e", APIVersion:"v1", ResourceVersion:"405", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-866212_8c90eb05-6a7b-4730-85a4-25a3ea1dc5cc became leader
	W1026 08:32:06.029302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:06.057936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:32:06.126875       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_8c90eb05-6a7b-4730-85a4-25a3ea1dc5cc!
	W1026 08:32:08.061187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:08.068574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:10.072648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:10.077354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:12.081462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:12.088209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:14.091441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:14.095811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:16.099841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:16.104428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:18.109214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:32:18.118941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
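The burst of "Failed to watch ... is forbidden" reflector errors in the kube-scheduler log above is transient RBAC bootstrap noise: the scheduler starts listing resources before the API server has finished installing the default ClusterRoleBindings for system:kube-scheduler, and the reflectors retry until the "Caches are synced" line appears. A minimal client-go sketch of the same cluster-scoped LIST and its Forbidden check (assuming in-cluster credentials; the package and function names are illustrative, not minikube code):

	package rbaccheck

	import (
		"context"

		apierrors "k8s.io/apimachinery/pkg/api/errors"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	// canListReplicaSets issues the same cluster-scoped LIST the scheduler's
	// reflector performs and reports whether RBAC currently allows it.
	func canListReplicaSets(ctx context.Context) (bool, error) {
		cfg, err := rest.InClusterConfig() // same credentials a scheduler pod would use
		if err != nil {
			return false, err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return false, err
		}
		_, err = cs.AppsV1().ReplicaSets(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
		if apierrors.IsForbidden(err) {
			return false, nil // RBAC not bootstrapped yet; a reflector would retry
		}
		return err == nil, err
	}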
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.65s)
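The repeating "v1 Endpoints is deprecated in v1.33+" warnings in the storage-provisioner log above come from its leader election, which still renews an Endpoints-based lock (kube-system/k8s.io-minikube-hostpath) every couple of seconds. For comparison, a hedged sketch of the modern equivalent using a coordination.k8s.io Lease lock with client-go; this is a sketch under those assumptions, not the provisioner's actual code:

	package provisioner

	import (
		"context"
		"time"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/leaderelection"
		"k8s.io/client-go/tools/leaderelection/resourcelock"
	)

	// runWithLeaseLock runs work while holding a Lease lock, avoiding the
	// deprecated Endpoints lock that triggers the warnings in the log.
	func runWithLeaseLock(ctx context.Context, cs *kubernetes.Clientset, id string, work func(context.Context)) {
		lock := &resourcelock.LeaseLock{
			LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
			Client:     cs.CoordinationV1(),
			LockConfig: resourcelock.ResourceLockConfig{Identity: id},
		}
		leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
			Lock:          lock,
			LeaseDuration: 15 * time.Second,
			RenewDeadline: 10 * time.Second,
			RetryPeriod:   2 * time.Second,
			Callbacks: leaderelection.LeaderCallbacks{
				OnStartedLeading: work,              // start provisioning once elected
				OnStoppedLeading: func() { /* lost the lease; stop provisioning */ },
			},
		})
	}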

x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.26s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (331.050052ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:23Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
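The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's pre-flight "is the runtime paused?" check, which shells out to `sudo runc list -f json`. On this cri-o node, runc's default state directory /run/runc does not exist (cri-o keeps container state under its own root), so the command exits 1 and the addon enable aborts before doing any work. A hedged Go sketch of such a check, passing an explicit --root so the state directory is an input rather than runc's default (illustrative names, not minikube's implementation):

	package pausecheck

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcContainer mirrors the fields we care about from `runc list -f json`.
	type runcContainer struct {
		ID     string `json:"id"`
		Status string `json:"status"` // "running", "paused", ...
	}

	// listPaused returns the IDs of paused containers under the given runc
	// state directory. Pointing runc at the runtime's actual root avoids the
	// "open /run/runc: no such file or directory" failure seen above.
	func listPaused(root string) ([]string, error) {
		out, err := exec.Command("sudo", "runc", "--root", root, "list", "-f", "json").Output()
		if err != nil {
			return nil, fmt.Errorf("runc list: %w", err) // exit status 1 if root is missing
		}
		var cs []runcContainer
		if err := json.Unmarshal(out, &cs); err != nil {
			return nil, err
		}
		var paused []string
		for _, c := range cs {
			if c.Status == "paused" {
				paused = append(paused, c.ID)
			}
		}
		return paused, nil
	}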
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-366970
helpers_test.go:243: (dbg) docker inspect newest-cni-366970:

-- stdout --
	[
	    {
	        "Id": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	        "Created": "2025-10-26T08:31:59.079010399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272359,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:31:59.116913671Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hostname",
	        "HostsPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hosts",
	        "LogPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018-json.log",
	        "Name": "/newest-cni-366970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-366970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-366970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	                "LowerDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-366970",
	                "Source": "/var/lib/docker/volumes/newest-cni-366970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-366970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-366970",
	                "name.minikube.sigs.k8s.io": "newest-cni-366970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7215a733cf9956a01991e85b39d5d710295dcd5231b31c7549028d614a38c817",
	            "SandboxKey": "/var/run/docker/netns/7215a733cf99",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-366970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3a:3b:f6:48:3c:86",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19ada62bb6d68780491bac6cfa6c8306dbe7ffb9866d24de190e8d5c662067df",
	                    "EndpointID": "5fcce84f7e0927bf714a15d526ca85064d690d1d8b95f60cc96d63ccc7e4988c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-366970",
	                        "c16db157b89e"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
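The inspect output above is exactly what the status helpers read: the container is running, and the Kubernetes API server port 8443/tcp is published on 127.0.0.1:33094. A hedged sketch of the same lookup with the Docker Go SDK (assuming github.com/docker/docker/client is available; the function name is illustrative):

	package inspect

	import (
		"context"
		"fmt"

		"github.com/docker/docker/client"
		"github.com/docker/go-connections/nat"
	)

	// apiServerHostPort returns the host address that container port 8443/tcp
	// is bound to, e.g. "127.0.0.1:33094" for the inspect output above.
	func apiServerHostPort(ctx context.Context, name string) (string, error) {
		cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
		if err != nil {
			return "", err
		}
		defer cli.Close()
		info, err := cli.ContainerInspect(ctx, name) // same data as `docker inspect`
		if err != nil {
			return "", err
		}
		bindings := info.NetworkSettings.Ports[nat.Port("8443/tcp")]
		if len(bindings) == 0 {
			return "", fmt.Errorf("8443/tcp is not published for %s", name)
		}
		return bindings[0].HostIP + ":" + bindings[0].HostPort, nil
	}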
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25: (1.003479515s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ old-k8s-version-810379 image list --format=json                                                                                                                                                                                               │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p old-k8s-version-810379 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p old-k8s-version-810379                                                                                                                                                                                                                     │ old-k8s-version-810379       │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ delete  │ -p disable-driver-mounts-209240                                                                                                                                                                                                               │ disable-driver-mounts-209240 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-866212 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:16
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:32:16.612087  278592 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:16.614229  278592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:16.614266  278592 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:16.614282  278592 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:16.614759  278592 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:16.615869  278592 out.go:368] Setting JSON to false
	I1026 08:32:16.617939  278592 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4488,"bootTime":1761463049,"procs":332,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:16.618128  278592 start.go:141] virtualization: kvm guest
	I1026 08:32:16.620455  278592 out.go:179] * [kindnet-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:16.621967  278592 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:16.621971  278592 notify.go:220] Checking for updates...
	I1026 08:32:16.628981  278592 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:16.630623  278592 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:16.632487  278592 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:16.633930  278592 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:16.636777  278592 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:16.638874  278592 config.go:182] Loaded profile config "auto-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639045  278592 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639211  278592 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:16.639349  278592 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:16.674844  278592 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:16.675000  278592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:16.759200  278592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:16.743141488 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:16.759390  278592 docker.go:318] overlay module found
	I1026 08:32:16.762482  278592 out.go:179] * Using the docker driver based on user configuration
	I1026 08:32:16.763818  278592 start.go:305] selected driver: docker
	I1026 08:32:16.763835  278592 start.go:925] validating driver "docker" against <nil>
	I1026 08:32:16.763847  278592 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:16.764484  278592 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:16.854338  278592 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:16.84246589 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:16.854656  278592 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:32:16.855159  278592 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:16.857180  278592 out.go:179] * Using Docker driver with root privileges
	I1026 08:32:16.858510  278592 cni.go:84] Creating CNI manager for "kindnet"
	I1026 08:32:16.858557  278592 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1026 08:32:16.858657  278592 start.go:349] cluster config:
	{Name:kindnet-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:16.860071  278592 out.go:179] * Starting "kindnet-110992" primary control-plane node in "kindnet-110992" cluster
	I1026 08:32:16.861301  278592 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:16.862891  278592 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:32:16.864220  278592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:16.864298  278592 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:16.864352  278592 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:16.864367  278592 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:16.864538  278592 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:16.864552  278592 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:16.864744  278592 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kindnet-110992/config.json ...
	I1026 08:32:16.864782  278592 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kindnet-110992/config.json: {Name:mk6340d63c3702426afb691e46b9d2a183a30ae2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:16.890809  278592 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:16.890862  278592 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:16.890882  278592 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:32:16.890916  278592 start.go:360] acquireMachinesLock for kindnet-110992: {Name:mk0ffaf9908881b6a0934c083112125f38970f56 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:16.891026  278592 start.go:364] duration metric: took 89.576µs to acquireMachinesLock for "kindnet-110992"
	I1026 08:32:16.891053  278592 start.go:93] Provisioning new machine with config: &{Name:kindnet-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kindnet-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:kindnet} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:16.891143  278592 start.go:125] createHost starting for "" (driver="docker")
	I1026 08:32:17.190887  270203 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:32:17.190988  270203 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:32:17.191182  270203 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:32:17.191323  270203 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:32:17.191375  270203 kubeadm.go:318] OS: Linux
	I1026 08:32:17.191466  270203 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:32:17.191568  270203 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:32:17.191711  270203 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:32:17.191804  270203 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:32:17.191879  270203 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:32:17.191971  270203 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:32:17.192060  270203 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:32:17.192121  270203 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:32:17.192198  270203 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:32:17.192334  270203 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:32:17.192447  270203 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:32:17.192520  270203 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1026 08:32:17.194894  270203 out.go:252]   - Generating certificates and keys ...
	I1026 08:32:17.194993  270203 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:32:17.195085  270203 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:32:17.195168  270203 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:32:17.195244  270203 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:32:17.195336  270203 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:32:17.195397  270203 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:32:17.195468  270203 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:32:17.195625  270203 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-366970] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 08:32:17.195692  270203 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:32:17.195845  270203 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-366970] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1026 08:32:17.195959  270203 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:32:17.196101  270203 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	I1026 08:32:17.196196  270203 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:32:17.196406  270203 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:32:17.196499  270203 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:32:17.196603  270203 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:32:17.196723  270203 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:32:17.196886  270203 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:32:17.196997  270203 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:32:17.197105  270203 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:32:17.197201  270203 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:32:17.199181  270203 out.go:252]   - Booting up control plane ...
	I1026 08:32:17.199317  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:32:17.199421  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:32:17.199504  270203 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:32:17.199635  270203 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:32:17.199784  270203 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:32:17.199926  270203 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:32:17.199999  270203 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:32:17.200032  270203 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:32:17.200136  270203 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:32:17.200219  270203 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:32:17.200289  270203 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.00088434s
	I1026 08:32:17.200406  270203 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:32:17.200514  270203 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I1026 08:32:17.200640  270203 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:32:17.200750  270203 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:32:17.200882  270203 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.035122888s
	I1026 08:32:17.200968  270203 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.255250898s
	I1026 08:32:17.201061  270203 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001973335s
	I1026 08:32:17.201215  270203 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:32:17.201386  270203 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:32:17.201442  270203 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:32:17.201654  270203 kubeadm.go:318] [mark-control-plane] Marking the node newest-cni-366970 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:32:17.201761  270203 kubeadm.go:318] [bootstrap-token] Using token: h3axcx.evqrjgvunvsgjdn2
	I1026 08:32:17.208590  270203 out.go:252]   - Configuring RBAC rules ...
	I1026 08:32:17.208734  270203 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:32:17.208839  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:32:17.209014  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:32:17.209173  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:32:17.209437  270203 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:32:17.209568  270203 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:32:17.209702  270203 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:32:17.209756  270203 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:32:17.209813  270203 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:32:17.209819  270203 kubeadm.go:318] 
	I1026 08:32:17.209895  270203 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:32:17.209903  270203 kubeadm.go:318] 
	I1026 08:32:17.210011  270203 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:32:17.210036  270203 kubeadm.go:318] 
	I1026 08:32:17.210069  270203 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:32:17.210149  270203 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:32:17.210241  270203 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:32:17.210292  270203 kubeadm.go:318] 
	I1026 08:32:17.210369  270203 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:32:17.210379  270203 kubeadm.go:318] 
	I1026 08:32:17.210452  270203 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:32:17.210465  270203 kubeadm.go:318] 
	I1026 08:32:17.210540  270203 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:32:17.210645  270203 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:32:17.210723  270203 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:32:17.210728  270203 kubeadm.go:318] 
	I1026 08:32:17.210822  270203 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:32:17.210912  270203 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:32:17.210918  270203 kubeadm.go:318] 
	I1026 08:32:17.211021  270203 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token h3axcx.evqrjgvunvsgjdn2 \
	I1026 08:32:17.211156  270203 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:32:17.211184  270203 kubeadm.go:318] 	--control-plane 
	I1026 08:32:17.211189  270203 kubeadm.go:318] 
	I1026 08:32:17.211301  270203 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:32:17.211308  270203 kubeadm.go:318] 
	I1026 08:32:17.211404  270203 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token h3axcx.evqrjgvunvsgjdn2 \
	I1026 08:32:17.211510  270203 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
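	[editor's note] The --discovery-token-ca-cert-hash in the join commands above is the hex-encoded SHA-256 of the cluster CA certificate's DER-encoded SubjectPublicKeyInfo, which is why both profiles in this run (sharing the minikube CA) print the same digest. A minimal Go sketch that reproduces it, assuming the certificateDir logged above ("/var/lib/minikube/certs") holds the CA as ca.crt; this is an illustration, not minikube's or kubeadm's code:

	    package main

	    import (
	    	"crypto/sha256"
	    	"crypto/x509"
	    	"encoding/pem"
	    	"fmt"
	    	"os"
	    )

	    func main() {
	    	// Assumed path: the certificateDir reported by kubeadm above.
	    	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	    	if err != nil {
	    		panic(err)
	    	}
	    	block, _ := pem.Decode(pemBytes)
	    	cert, err := x509.ParseCertificate(block.Bytes)
	    	if err != nil {
	    		panic(err)
	    	}
	    	// The discovery hash is SHA-256 over the CA cert's SubjectPublicKeyInfo.
	    	fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	    }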
	I1026 08:32:17.211521  270203 cni.go:84] Creating CNI manager for ""
	I1026 08:32:17.211529  270203 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:17.213403  270203 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:32:17.214756  270203 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:32:17.221301  270203 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:32:17.221323  270203 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:32:17.241553  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:32:17.615615  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:17.615642  270203 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:32:17.615761  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-366970 minikube.k8s.io/updated_at=2025_10_26T08_32_17_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=newest-cni-366970 minikube.k8s.io/primary=true
	I1026 08:32:17.736191  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:17.738358  270203 ops.go:34] apiserver oom_adj: -16
	I1026 08:32:18.236731  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:15.763607  273227 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001455744s
	I1026 08:32:15.768444  273227 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:32:15.768591  273227 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 08:32:15.768723  273227 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:32:15.768815  273227 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:32:17.597940  273227 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.829383243s
	I1026 08:32:18.175723  273227 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.407296616s
	I1026 08:32:20.269998  273227 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.501537204s
	I1026 08:32:20.283936  273227 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:32:20.302720  273227 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:32:20.314850  273227 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:32:20.315156  273227 kubeadm.go:318] [mark-control-plane] Marking the node auto-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:32:20.326067  273227 kubeadm.go:318] [bootstrap-token] Using token: vom680.7344wgspgt7sfdf6
	I1026 08:32:20.327693  273227 out.go:252]   - Configuring RBAC rules ...
	I1026 08:32:20.327858  273227 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:32:20.332173  273227 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:32:20.342388  273227 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:32:20.363747  273227 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:32:16.894938  278592 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:32:16.895273  278592 start.go:159] libmachine.API.Create for "kindnet-110992" (driver="docker")
	I1026 08:32:16.895306  278592 client.go:168] LocalClient.Create starting
	I1026 08:32:16.895431  278592 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:32:16.895477  278592 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:16.895501  278592 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:16.895582  278592 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:32:16.895617  278592 main.go:141] libmachine: Decoding PEM data...
	I1026 08:32:16.895629  278592 main.go:141] libmachine: Parsing certificate...
	I1026 08:32:16.896071  278592 cli_runner.go:164] Run: docker network inspect kindnet-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:32:16.918357  278592 cli_runner.go:211] docker network inspect kindnet-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:32:16.918435  278592 network_create.go:284] running [docker network inspect kindnet-110992] to gather additional debugging logs...
	I1026 08:32:16.918457  278592 cli_runner.go:164] Run: docker network inspect kindnet-110992
	W1026 08:32:16.940528  278592 cli_runner.go:211] docker network inspect kindnet-110992 returned with exit code 1
	I1026 08:32:16.940560  278592 network_create.go:287] error running [docker network inspect kindnet-110992]: docker network inspect kindnet-110992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network kindnet-110992 not found
	I1026 08:32:16.940749  278592 network_create.go:289] output of [docker network inspect kindnet-110992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network kindnet-110992 not found
	
	** /stderr **
	I1026 08:32:16.940925  278592 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:32:16.964706  278592 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:32:16.965625  278592 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:32:16.966566  278592 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:32:16.967382  278592 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-e478294c0fcd IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:3a:92:08:ce:97:6e} reservation:<nil>}
	I1026 08:32:16.968083  278592 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-19ada62bb6d6 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ca:ce:bc:92:3d:7e} reservation:<nil>}
	I1026 08:32:16.968684  278592 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-b6895eb84e54 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:32:d6:42:33:8d:0a} reservation:<nil>}
	I1026 08:32:16.969677  278592 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001edcd70}
	I1026 08:32:16.969707  278592 network_create.go:124] attempt to create docker network kindnet-110992 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1026 08:32:16.969758  278592 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=kindnet-110992 kindnet-110992
	I1026 08:32:17.048088  278592 network_create.go:108] docker network kindnet-110992 192.168.103.0/24 created
	I1026 08:32:17.048125  278592 kic.go:121] calculated static IP "192.168.103.2" for the "kindnet-110992" container
	I1026 08:32:17.048218  278592 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:32:17.072872  278592 cli_runner.go:164] Run: docker volume create kindnet-110992 --label name.minikube.sigs.k8s.io=kindnet-110992 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:32:17.099479  278592 oci.go:103] Successfully created a docker volume kindnet-110992
	I1026 08:32:17.099631  278592 cli_runner.go:164] Run: docker run --rm --name kindnet-110992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kindnet-110992 --entrypoint /usr/bin/test -v kindnet-110992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:32:17.651698  278592 oci.go:107] Successfully prepared a docker volume kindnet-110992
	I1026 08:32:17.651763  278592 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:17.651787  278592 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:32:17.651854  278592 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v kindnet-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
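	[editor's note] The network.go lines in the kindnet-110992 block above walk candidate 192.168.x.0/24 subnets, skipping each one already claimed by an existing bridge (49, 58, 67, 76, 85, 94) and settling on 192.168.103.0/24. A minimal sketch of that first-free scan, assuming the step of 9 between candidates that the log suggests; the isTaken callback is hypothetical and stands in for minikube's actual interface/bridge check:

	    package main

	    import "fmt"

	    // firstFreeSubnet tries 192.168.49.0/24, then steps the third octet
	    // by 9 (58, 67, 76, ...) until a candidate is not already taken.
	    func firstFreeSubnet(isTaken func(thirdOctet int) bool) (string, bool) {
	    	for octet := 49; octet <= 254; octet += 9 {
	    		if !isTaken(octet) {
	    			return fmt.Sprintf("192.168.%d.0/24", octet), true
	    		}
	    	}
	    	return "", false
	    }

	    func main() {
	    	// The six subnets the log reports as taken for this run.
	    	taken := map[int]bool{49: true, 58: true, 67: true, 76: true, 85: true, 94: true}
	    	subnet, ok := firstFreeSubnet(func(o int) bool { return taken[o] })
	    	fmt.Println(subnet, ok) // 192.168.103.0/24 true, matching the network created above
	    }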
	I1026 08:32:18.737193  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:19.237275  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:19.737006  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:20.236602  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:20.737002  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:21.237131  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:21.736293  270203 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:21.868481  270203 kubeadm.go:1113] duration metric: took 4.2529268s to wait for elevateKubeSystemPrivileges
	I1026 08:32:21.868515  270203 kubeadm.go:402] duration metric: took 14.947286222s to StartCluster
	I1026 08:32:21.868535  270203 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:21.868605  270203 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:21.870052  270203 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:21.870398  270203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:32:21.870411  270203 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:21.870509  270203 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:32:21.870605  270203 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:21.870613  270203 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-366970"
	I1026 08:32:21.870633  270203 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-366970"
	I1026 08:32:21.870676  270203 host.go:66] Checking if "newest-cni-366970" exists ...
	I1026 08:32:21.870635  270203 addons.go:69] Setting default-storageclass=true in profile "newest-cni-366970"
	I1026 08:32:21.870768  270203 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-366970"
	I1026 08:32:21.871205  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:21.871233  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:21.895167  270203 addons.go:238] Setting addon default-storageclass=true in "newest-cni-366970"
	I1026 08:32:21.895214  270203 host.go:66] Checking if "newest-cni-366970" exists ...
	I1026 08:32:21.895742  270203 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:21.915960  270203 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:21.915986  270203 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:32:21.916037  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:21.939915  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:22.042994  270203 out.go:179] * Verifying Kubernetes components...
	I1026 08:32:22.043161  270203 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:32:20.492669  273227 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:32:20.515472  273227 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:32:20.805950  273227 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:32:22.085365  273227 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:32:22.485631  273227 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:32:22.487050  273227 kubeadm.go:318] 
	I1026 08:32:22.487147  273227 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:32:22.487155  273227 kubeadm.go:318] 
	I1026 08:32:22.487270  273227 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:32:22.487295  273227 kubeadm.go:318] 
	I1026 08:32:22.487324  273227 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:32:22.487405  273227 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:32:22.487467  273227 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:32:22.487475  273227 kubeadm.go:318] 
	I1026 08:32:22.487550  273227 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:32:22.487559  273227 kubeadm.go:318] 
	I1026 08:32:22.487610  273227 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:32:22.487615  273227 kubeadm.go:318] 
	I1026 08:32:22.487676  273227 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:32:22.487768  273227 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:32:22.487853  273227 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:32:22.487860  273227 kubeadm.go:318] 
	I1026 08:32:22.488014  273227 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:32:22.488108  273227 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:32:22.488115  273227 kubeadm.go:318] 
	I1026 08:32:22.488214  273227 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token vom680.7344wgspgt7sfdf6 \
	I1026 08:32:22.488357  273227 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:32:22.488387  273227 kubeadm.go:318] 	--control-plane 
	I1026 08:32:22.488393  273227 kubeadm.go:318] 
	I1026 08:32:22.488491  273227 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:32:22.488503  273227 kubeadm.go:318] 
	I1026 08:32:22.488593  273227 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token vom680.7344wgspgt7sfdf6 \
	I1026 08:32:22.488709  273227 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:32:22.493045  273227 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:32:22.493197  273227 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:32:22.493243  273227 cni.go:84] Creating CNI manager for ""
	I1026 08:32:22.493289  273227 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:22.549736  273227 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:32:22.051527  270203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:22.063484  270203 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:32:22.063534  270203 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:32:22.063612  270203 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:22.064482  270203 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:22.088952  270203 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33091 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:22.160589  270203 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:32:22.253114  270203 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:32:22.518905  270203 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:22.687876  270203 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
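	[editor's note] The sed pipeline a few lines above rewrites the coredns ConfigMap in place: it inserts a hosts block in front of the forward plugin (mapping host.minikube.internal to the host gateway 192.168.85.1) and a log directive in front of errors. Assuming the stock Corefile layout, the resulting fragment looks roughly like this (unchanged stock directives elided with "..."):

	    .:53 {
	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf
	        ...
	    }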
	I1026 08:32:22.755442  270203 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:22.755524  270203 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:22.755801  270203 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I1026 08:32:22.757740  270203 addons.go:514] duration metric: took 887.237656ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I1026 08:32:22.773237  270203 api_server.go:72] duration metric: took 902.786234ms to wait for apiserver process to appear ...
	I1026 08:32:22.773288  270203 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:22.773311  270203 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:22.780174  270203 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 08:32:22.781894  270203 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:22.781926  270203 api_server.go:131] duration metric: took 8.630003ms to wait for apiserver health ...
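	[editor's note] The healthz wait above is a plain HTTPS GET against the apiserver, repeated until it answers 200 with body "ok". A minimal sketch of such a probe against the endpoint logged above; the TLS skip-verify is for illustration only (the real client trusts the cluster CA), and the endpoint is specific to this run:

	    package main

	    import (
	    	"crypto/tls"
	    	"fmt"
	    	"io"
	    	"net/http"
	    	"time"
	    )

	    func main() {
	    	client := &http.Client{
	    		Timeout: 2 * time.Second,
	    		// Demo only: skip CA verification instead of loading the cluster CA.
	    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	    	}
	    	for {
	    		resp, err := client.Get("https://192.168.85.2:8443/healthz")
	    		if err == nil {
	    			body, _ := io.ReadAll(resp.Body)
	    			resp.Body.Close()
	    			if resp.StatusCode == 200 {
	    				fmt.Println("healthz:", string(body)) // "ok"
	    				return
	    			}
	    		}
	    		time.Sleep(500 * time.Millisecond) // poll until the apiserver answers
	    	}
	    }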
	I1026 08:32:22.781979  270203 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:22.786266  270203 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:22.786322  270203 system_pods.go:61] "coredns-66bc5c9577-9xk4x" [4d2bf056-0455-412c-ab4c-5c5680aff306] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:22.786343  270203 system_pods.go:61] "etcd-newest-cni-366970" [5879f65b-4bc9-45bb-b7ea-97a3f98a0854] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:32:22.786359  270203 system_pods.go:61] "kindnet-vzchv" [1a35b08e-08fd-4546-b4c0-79f6e3f3f29b] Pending / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1026 08:32:22.786370  270203 system_pods.go:61] "kube-apiserver-newest-cni-366970" [6a35c9e5-f940-4ed4-844c-6a1314e1a01d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:32:22.786379  270203 system_pods.go:61] "kube-controller-manager-newest-cni-366970" [e32bbffb-6e52-422f-aedf-a15bd47f2e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:32:22.786388  270203 system_pods.go:61] "kube-proxy-t2z7c" [73aa16de-9d34-4a0f-9c14-8ec0306d69f6] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1026 08:32:22.786396  270203 system_pods.go:61] "kube-scheduler-newest-cni-366970" [ad5a05b4-584f-4bd2-9f5b-1635269c14d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:32:22.786404  270203 system_pods.go:61] "storage-provisioner" [1f9d7ffb-20ea-4a1f-a5c0-7b8b0ab3e7b0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:22.786413  270203 system_pods.go:74] duration metric: took 4.42674ms to wait for pod list to return data ...
	I1026 08:32:22.786425  270203 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:22.790328  270203 default_sa.go:45] found service account: "default"
	I1026 08:32:22.790352  270203 default_sa.go:55] duration metric: took 3.914293ms for default service account to be created ...
	I1026 08:32:22.790367  270203 kubeadm.go:586] duration metric: took 919.921832ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 08:32:22.790387  270203 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:22.792762  270203 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:22.792793  270203 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:22.792810  270203 node_conditions.go:105] duration metric: took 2.417316ms to run NodePressure ...
	I1026 08:32:22.792825  270203 start.go:241] waiting for startup goroutines ...
	I1026 08:32:23.192853  270203 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-366970" context rescaled to 1 replicas
	I1026 08:32:23.192897  270203 start.go:246] waiting for cluster config update ...
	I1026 08:32:23.192913  270203 start.go:255] writing updated cluster config ...
	I1026 08:32:23.193187  270203 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:23.256833  270203 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:23.258794  270203 out.go:179] * Done! kubectl is now configured to use "newest-cni-366970" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.569676842Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.573935828Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=6f69518c-2ca0-4223-843e-890343bbdd92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.581083258Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.589043977Z" level=info msg="Ran pod sandbox 87806f6f7b70f4db36522c604a0787ff17cc40ee626b8f96385b145cc756ba85 with infra container: kube-system/kindnet-vzchv/POD" id=6f69518c-2ca0-4223-843e-890343bbdd92 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.589820823Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=68178af4-35a1-4c40-b0b1-622afee5fd1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.597845791Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.599988031Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=f7c82b60-8194-4b38-8c8d-f64a948f82e1 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.605985032Z" level=info msg="Ran pod sandbox 58fcf5d38d5f6b3a6e00a96b81fea0b1ce2aed60940443b5da9433e86188e3b1 with infra container: kube-system/kube-proxy-t2z7c/POD" id=68178af4-35a1-4c40-b0b1-622afee5fd1f name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.612781084Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=62d0b3f6-89d0-4590-ad0e-e162b04cf003 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.617446317Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=e683170b-72eb-4a60-b943-0fb6d23f1a9a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.625317676Z" level=info msg="Creating container: kube-system/kindnet-vzchv/kindnet-cni" id=744e4072-a9b0-4845-b283-4673042e8e97 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.626232213Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9b88bbc6-6ddd-47ee-bff2-bc128a7aba51 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.628231526Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.639802604Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.640481755Z" level=info msg="Creating container: kube-system/kube-proxy-t2z7c/kube-proxy" id=60e53a06-b75e-4386-8599-2cbb6ca58544 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.640648951Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.640773902Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.653324059Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.65393244Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.690780202Z" level=info msg="Created container 91c8e2a81783d613a82016d43c7e703b4226848eb5fe1024f17e428b7d344b87: kube-system/kindnet-vzchv/kindnet-cni" id=744e4072-a9b0-4845-b283-4673042e8e97 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.692189158Z" level=info msg="Starting container: 91c8e2a81783d613a82016d43c7e703b4226848eb5fe1024f17e428b7d344b87" id=e8981959-ea82-41ce-960d-2e9c75a24ac0 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.694812862Z" level=info msg="Started container" PID=1582 containerID=91c8e2a81783d613a82016d43c7e703b4226848eb5fe1024f17e428b7d344b87 description=kube-system/kindnet-vzchv/kindnet-cni id=e8981959-ea82-41ce-960d-2e9c75a24ac0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=87806f6f7b70f4db36522c604a0787ff17cc40ee626b8f96385b145cc756ba85
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.707324186Z" level=info msg="Created container 8d7460029780b53f512e4488faac5aa1d24e5dc72a26dcf310a114ee4e790c3e: kube-system/kube-proxy-t2z7c/kube-proxy" id=60e53a06-b75e-4386-8599-2cbb6ca58544 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.708482726Z" level=info msg="Starting container: 8d7460029780b53f512e4488faac5aa1d24e5dc72a26dcf310a114ee4e790c3e" id=4034dc0d-11d4-44b7-86ac-7dc100e1e4ab name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:22 newest-cni-366970 crio[777]: time="2025-10-26T08:32:22.712215474Z" level=info msg="Started container" PID=1585 containerID=8d7460029780b53f512e4488faac5aa1d24e5dc72a26dcf310a114ee4e790c3e description=kube-system/kube-proxy-t2z7c/kube-proxy id=4034dc0d-11d4-44b7-86ac-7dc100e1e4ab name=/runtime.v1.RuntimeService/StartContainer sandboxID=58fcf5d38d5f6b3a6e00a96b81fea0b1ce2aed60940443b5da9433e86188e3b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8d7460029780b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   58fcf5d38d5f6       kube-proxy-t2z7c                            kube-system
	91c8e2a81783d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   87806f6f7b70f       kindnet-vzchv                               kube-system
	ce4700b96f10f       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   d93c7decfdd97       kube-scheduler-newest-cni-366970            kube-system
	8a86e4dd08a6e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   9303325e03869       kube-apiserver-newest-cni-366970            kube-system
	f93aea3113444       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   93d541a3e962b       kube-controller-manager-newest-cni-366970   kube-system
	99bae6c2c4635       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   f66ec4366662c       etcd-newest-cni-366970                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-366970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-366970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-366970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_32_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:32:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-366970
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:16 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:16 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:16 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 08:32:16 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-366970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a456af2-76d6-4f3f-b16f-fdf9a4915e23
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-366970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         9s
	  kube-system                 kindnet-vzchv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3s
	  kube-system                 kube-apiserver-newest-cni-366970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-controller-manager-newest-cni-366970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 kube-proxy-t2z7c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3s
	  kube-system                 kube-scheduler-newest-cni-366970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-366970 event: Registered Node newest-cni-366970 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [99bae6c2c463553fe0e3118ab99c5860a63535453a8df1ac57e55153fe8f694d] <==
	{"level":"warn","ts":"2025-10-26T08:32:22.146522Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"137.621197ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" limit:1 ","response":"range_response_count:1 size:193"}
	{"level":"warn","ts":"2025-10-26T08:32:22.146558Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"187.0485ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-cidrs-controller\" limit:1 ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-10-26T08:32:22.146593Z","caller":"traceutil/trace.go:172","msg":"trace[2121674939] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:348; }","duration":"137.708129ms","start":"2025-10-26T08:32:22.008869Z","end":"2025-10-26T08:32:22.146577Z","steps":["trace[2121674939] 'agreement among raft nodes before linearized reading'  (duration: 137.494005ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.146593Z","caller":"traceutil/trace.go:172","msg":"trace[285201036] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-cidrs-controller; range_end:; response_count:1; response_revision:348; }","duration":"187.086679ms","start":"2025-10-26T08:32:21.959497Z","end":"2025-10-26T08:32:22.146584Z","steps":["trace[285201036] 'agreement among raft nodes before linearized reading'  (duration: 186.892222ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T08:32:22.146808Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.035143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-26T08:32:22.146844Z","caller":"traceutil/trace.go:172","msg":"trace[1177544417] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:348; }","duration":"199.095597ms","start":"2025-10-26T08:32:21.947739Z","end":"2025-10-26T08:32:22.146834Z","steps":["trace[1177544417] 'agreement among raft nodes before linearized reading'  (duration: 198.951226ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.147100Z","caller":"traceutil/trace.go:172","msg":"trace[2012673401] transaction","detail":"{read_only:false; response_revision:352; number_of_response:1; }","duration":"209.254848ms","start":"2025-10-26T08:32:21.937832Z","end":"2025-10-26T08:32:22.147087Z","steps":["trace[2012673401] 'process raft request'  (duration: 209.221165ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.147114Z","caller":"traceutil/trace.go:172","msg":"trace[2111736689] transaction","detail":"{read_only:false; response_revision:350; number_of_response:1; }","duration":"211.726517ms","start":"2025-10-26T08:32:21.935362Z","end":"2025-10-26T08:32:22.147089Z","steps":["trace[2111736689] 'process raft request'  (duration: 211.617993ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.147146Z","caller":"traceutil/trace.go:172","msg":"trace[1155573682] transaction","detail":"{read_only:false; response_revision:351; number_of_response:1; }","duration":"210.366521ms","start":"2025-10-26T08:32:21.936767Z","end":"2025-10-26T08:32:22.147134Z","steps":["trace[1155573682] 'process raft request'  (duration: 210.249453ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.147375Z","caller":"traceutil/trace.go:172","msg":"trace[465848266] transaction","detail":"{read_only:false; response_revision:349; number_of_response:1; }","duration":"212.476948ms","start":"2025-10-26T08:32:21.934889Z","end":"2025-10-26T08:32:22.147366Z","steps":["trace[465848266] 'process raft request'  (duration: 211.986777ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.254917Z","caller":"traceutil/trace.go:172","msg":"trace[1443879026] transaction","detail":"{read_only:false; response_revision:359; number_of_response:1; }","duration":"100.067651ms","start":"2025-10-26T08:32:22.154626Z","end":"2025-10-26T08:32:22.254694Z","steps":["trace[1443879026] 'process raft request'  (duration: 78.522628ms)","trace[1443879026] 'compare'  (duration: 20.776092ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:22.255395Z","caller":"traceutil/trace.go:172","msg":"trace[813434286] transaction","detail":"{read_only:false; response_revision:360; number_of_response:1; }","duration":"100.2766ms","start":"2025-10-26T08:32:22.155104Z","end":"2025-10-26T08:32:22.255380Z","steps":["trace[813434286] 'process raft request'  (duration: 99.22479ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.397131Z","caller":"traceutil/trace.go:172","msg":"trace[2077705725] linearizableReadLoop","detail":"{readStateIndex:378; appliedIndex:378; }","duration":"123.167456ms","start":"2025-10-26T08:32:22.273924Z","end":"2025-10-26T08:32:22.397092Z","steps":["trace[2077705725] 'read index received'  (duration: 123.15807ms)","trace[2077705725] 'applied index is now lower than readState.Index'  (duration: 8.121µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:22.507355Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"233.394535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" limit:1 ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2025-10-26T08:32:22.507430Z","caller":"traceutil/trace.go:172","msg":"trace[666133262] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:366; }","duration":"233.493942ms","start":"2025-10-26T08:32:22.273918Z","end":"2025-10-26T08:32:22.507412Z","steps":["trace[666133262] 'agreement among raft nodes before linearized reading'  (duration: 123.296565ms)","trace[666133262] 'range keys from in-memory index tree'  (duration: 109.966835ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:22.507609Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.215941ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722596631274292355 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/servicecidrs/kubernetes\" mod_revision:363 > success:<request_put:<key:\"/registry/servicecidrs/kubernetes\" value_size:951 >> failure:<request_range:<key:\"/registry/servicecidrs/kubernetes\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:22.507792Z","caller":"traceutil/trace.go:172","msg":"trace[169526579] transaction","detail":"{read_only:false; response_revision:367; number_of_response:1; }","duration":"237.415558ms","start":"2025-10-26T08:32:22.270361Z","end":"2025-10-26T08:32:22.507777Z","steps":["trace[169526579] 'process raft request'  (duration: 126.84558ms)","trace[169526579] 'compare'  (duration: 110.091444ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:22.507942Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"230.040552ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" limit:1 ","response":"range_response_count:1 size:693"}
	{"level":"info","ts":"2025-10-26T08:32:22.507985Z","caller":"traceutil/trace.go:172","msg":"trace[1725542805] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:369; }","duration":"230.087736ms","start":"2025-10-26T08:32:22.277885Z","end":"2025-10-26T08:32:22.507973Z","steps":["trace[1725542805] 'agreement among raft nodes before linearized reading'  (duration: 229.964706ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.507985Z","caller":"traceutil/trace.go:172","msg":"trace[1356577346] transaction","detail":"{read_only:false; response_revision:372; number_of_response:1; }","duration":"228.106547ms","start":"2025-10-26T08:32:22.279869Z","end":"2025-10-26T08:32:22.507975Z","steps":["trace[1356577346] 'process raft request'  (duration: 228.069698ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.507818Z","caller":"traceutil/trace.go:172","msg":"trace[1024397964] linearizableReadLoop","detail":"{readStateIndex:379; appliedIndex:378; }","duration":"110.610289ms","start":"2025-10-26T08:32:22.397192Z","end":"2025-10-26T08:32:22.507803Z","steps":["trace[1024397964] 'read index received'  (duration: 90.865398ms)","trace[1024397964] 'applied index is now lower than readState.Index'  (duration: 19.743282ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:22.508172Z","caller":"traceutil/trace.go:172","msg":"trace[1664573928] transaction","detail":"{read_only:false; response_revision:371; number_of_response:1; }","duration":"234.157485ms","start":"2025-10-26T08:32:22.274005Z","end":"2025-10-26T08:32:22.508162Z","steps":["trace[1664573928] 'process raft request'  (duration: 233.886451ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.508339Z","caller":"traceutil/trace.go:172","msg":"trace[1221310398] transaction","detail":"{read_only:false; response_revision:369; number_of_response:1; }","duration":"237.364257ms","start":"2025-10-26T08:32:22.270966Z","end":"2025-10-26T08:32:22.508330Z","steps":["trace[1221310398] 'process raft request'  (duration: 236.841356ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.508453Z","caller":"traceutil/trace.go:172","msg":"trace[1065051923] transaction","detail":"{read_only:false; response_revision:370; number_of_response:1; }","duration":"235.920448ms","start":"2025-10-26T08:32:22.272524Z","end":"2025-10-26T08:32:22.508445Z","steps":["trace[1065051923] 'process raft request'  (duration: 235.327824ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:22.507950Z","caller":"traceutil/trace.go:172","msg":"trace[1501715359] transaction","detail":"{read_only:false; response_revision:368; number_of_response:1; }","duration":"237.253005ms","start":"2025-10-26T08:32:22.270681Z","end":"2025-10-26T08:32:22.507934Z","steps":["trace[1501715359] 'process raft request'  (duration: 237.060146ms)"],"step_count":1}
	
	
	==> kernel <==
	 08:32:24 up  1:14,  0 user,  load average: 4.44, 3.52, 2.21
	Linux newest-cni-366970 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [91c8e2a81783d613a82016d43c7e703b4226848eb5fe1024f17e428b7d344b87] <==
	I1026 08:32:22.936141       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:32:22.964102       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 08:32:22.964278       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:32:22.964300       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:32:22.964333       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:32:23Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:32:23.263971       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:32:23.263998       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:32:23.264010       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:32:23.264176       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:32:23.664123       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:32:23.664151       1 metrics.go:72] Registering metrics
	I1026 08:32:23.664209       1 controller.go:711] "Syncing nftables rules"
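	
The "nri plugin exited" line above shows kindnet failing to reach the runtime's NRI socket: /var/run/nri/nri.sock does not exist, so the plugin gives up, and the following "Caches are synced" lines show the controller running fine without it. A minimal probe for the socket, using the path from the log (whether the runtime should expose NRI here is an assumption, not confirmed by this report):

	// nri_probe.go: hedged availability check, not part of kindnet.
	// An absent socket reproduces the "no such file or directory"
	// error in the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("unix", "/var/run/nri/nri.sock", time.Second)
		if err != nil {
			fmt.Println("NRI unavailable:", err)
			return
		}
		conn.Close()
		fmt.Println("NRI socket present")
	}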
	
	
	==> kube-apiserver [8a86e4dd08a6e245d098691686ac218246b330a86bc3dd4560a44c654b63a218] <==
	I1026 08:32:14.115583       1 policy_source.go:240] refreshing policies
	E1026 08:32:14.120481       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1026 08:32:14.167351       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:32:14.168940       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:32:14.169452       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1026 08:32:14.175143       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:32:14.175698       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:32:14.294273       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:32:14.968484       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1026 08:32:14.973856       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1026 08:32:14.973876       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:32:15.549135       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:32:15.591027       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:32:15.673754       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1026 08:32:15.680527       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1026 08:32:15.681761       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:32:15.686473       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:32:16.014001       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:32:16.588584       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:32:16.607423       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1026 08:32:16.620642       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1026 08:32:21.725228       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:32:21.800428       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1026 08:32:22.258930       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1026 08:32:22.509790       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [f93aea3113444f20032378fe27c0d6975cddcc59b8d472ae20974caacb0b70a2] <==
	I1026 08:32:21.061735       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:32:21.062391       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:32:21.062748       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:32:21.062817       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:32:21.062856       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 08:32:21.063609       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:32:21.063806       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:32:21.063872       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:32:21.064039       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:32:21.065790       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:32:21.068943       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1026 08:32:21.069005       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:32:21.071304       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:21.071369       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:32:21.080589       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:32:21.086387       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:32:21.090503       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:32:21.093377       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1026 08:32:21.104640       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:32:21.104795       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 08:32:21.111158       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:32:21.111295       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:32:21.111378       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-366970"
	I1026 08:32:21.111430       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 08:32:21.154390       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-366970" podCIDRs=["10.42.0.0/24"]
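	
The final lines above show the node-lifecycle controller entering master disruption mode, which is expected while the only node is still not-Ready, and the ipam controller assigning PodCIDR 10.42.0.0/24. A hedged sketch for confirming the CIDR landed on the node object, shelling out to kubectl (kubectl on PATH and cluster access are assumptions):

	// podcidr_check.go: illustrative sketch, not part of the harness.
	// Queries the node's assigned PodCIDR with a standard jsonpath.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "get", "node", "newest-cni-366970",
			"-o", "jsonpath={.spec.podCIDR}").CombinedOutput()
		if err != nil {
			fmt.Println("query failed:", err)
		}
		fmt.Printf("podCIDR=%s\n", out)
	}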
	
	
	==> kube-proxy [8d7460029780b53f512e4488faac5aa1d24e5dc72a26dcf310a114ee4e790c3e] <==
	I1026 08:32:22.775654       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:32:22.842788       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:32:22.942898       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:32:22.943047       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 08:32:22.943191       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:32:22.967313       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:32:22.967372       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:32:22.975076       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:32:22.975543       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:32:22.975581       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:22.978652       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:32:22.978737       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:32:22.978755       1 config.go:200] "Starting service config controller"
	I1026 08:32:22.978778       1 config.go:309] "Starting node config controller"
	I1026 08:32:22.978787       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:32:22.978763       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:32:22.978796       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:32:22.978782       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:32:22.978797       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:32:23.078944       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:32:23.079033       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:32:23.079039       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [ce4700b96f10f2c6b04e99f6c0614e949359e51e8ffe1611813e91c19ec7ae37] <==
	E1026 08:32:14.023439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1026 08:32:14.023521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1026 08:32:14.023661       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:32:14.023739       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1026 08:32:14.023806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:32:14.023891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:32:14.023960       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1026 08:32:14.024380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:32:14.024926       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:32:14.025098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1026 08:32:14.025175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:32:14.025206       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:32:14.843133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1026 08:32:14.924263       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1026 08:32:14.960018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1026 08:32:15.033243       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1026 08:32:15.063608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1026 08:32:15.067843       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1026 08:32:15.079239       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1026 08:32:15.180898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1026 08:32:15.229727       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1026 08:32:15.251003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1026 08:32:15.286413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1026 08:32:15.290531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I1026 08:32:18.011428       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
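	
The burst of "Failed to watch ... is forbidden" errors above is the usual startup ordering race: the scheduler's informers start listing before the apiserver has reconciled the system:kube-scheduler RBAC bindings, the reflectors retry, and the "Caches are synced" line at 08:32:18 marks recovery. A hedged way to confirm the bindings afterwards, via kubectl impersonation (kubectl on PATH is an assumption):

	// rbac_check.go: illustrative sketch. "kubectl auth can-i" with --as
	// impersonates the scheduler identity and prints "yes" once the
	// bootstrap ClusterRoleBindings exist.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("kubectl", "auth", "can-i", "list", "pods",
			"--as", "system:kube-scheduler").CombinedOutput()
		fmt.Printf("%s", out)
		if err != nil {
			fmt.Println("denied or failed:", err) // can-i also exits non-zero on "no"
		}
	}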
	
	
	==> kubelet <==
	Oct 26 08:32:16 newest-cni-366970 kubelet[1302]: I1026 08:32:16.672103    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/46a6f50e0de3a98863111d7c36ff68f3-usr-local-share-ca-certificates\") pod \"kube-apiserver-newest-cni-366970\" (UID: \"46a6f50e0de3a98863111d7c36ff68f3\") " pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:16 newest-cni-366970 kubelet[1302]: I1026 08:32:16.672138    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d6329592fd0086b2310edecb05fc6a8c-flexvolume-dir\") pod \"kube-controller-manager-newest-cni-366970\" (UID: \"d6329592fd0086b2310edecb05fc6a8c\") " pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:16 newest-cni-366970 kubelet[1302]: I1026 08:32:16.672167    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d6329592fd0086b2310edecb05fc6a8c-usr-share-ca-certificates\") pod \"kube-controller-manager-newest-cni-366970\" (UID: \"d6329592fd0086b2310edecb05fc6a8c\") " pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.446725    1302 apiserver.go:52] "Watching apiserver"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.466905    1302 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.485470    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-366970" podStartSLOduration=1.485447013 podStartE2EDuration="1.485447013s" podCreationTimestamp="2025-10-26 08:32:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:17.485334226 +0000 UTC m=+1.126605307" watchObservedRunningTime="2025-10-26 08:32:17.485447013 +0000 UTC m=+1.126718097"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.520525    1302 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.521235    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-366970" podStartSLOduration=2.5211962310000002 podStartE2EDuration="2.521196231s" podCreationTimestamp="2025-10-26 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:17.500689927 +0000 UTC m=+1.141961010" watchObservedRunningTime="2025-10-26 08:32:17.521196231 +0000 UTC m=+1.162467312"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.521505    1302 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: E1026 08:32:17.545355    1302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-366970\" already exists" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: E1026 08:32:17.545557    1302 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-366970\" already exists" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.562808    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-366970" podStartSLOduration=2.562779864 podStartE2EDuration="2.562779864s" podCreationTimestamp="2025-10-26 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:17.521847431 +0000 UTC m=+1.163118515" watchObservedRunningTime="2025-10-26 08:32:17.562779864 +0000 UTC m=+1.204050945"
	Oct 26 08:32:17 newest-cni-366970 kubelet[1302]: I1026 08:32:17.594017    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-366970" podStartSLOduration=2.593991597 podStartE2EDuration="2.593991597s" podCreationTimestamp="2025-10-26 08:32:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:17.564608194 +0000 UTC m=+1.205879277" watchObservedRunningTime="2025-10-26 08:32:17.593991597 +0000 UTC m=+1.235262679"
	Oct 26 08:32:21 newest-cni-366970 kubelet[1302]: I1026 08:32:21.174959    1302 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 08:32:21 newest-cni-366970 kubelet[1302]: I1026 08:32:21.175744    1302 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.313054    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-cni-cfg\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314116    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-lib-modules\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314297    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-lib-modules\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314349    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xrh7\" (UniqueName: \"kubernetes.io/projected/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-kube-api-access-9xrh7\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314384    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-xtables-lock\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314414    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-xtables-lock\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314436    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htzxq\" (UniqueName: \"kubernetes.io/projected/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-kube-api-access-htzxq\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:22 newest-cni-366970 kubelet[1302]: I1026 08:32:22.314465    1302 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-kube-proxy\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:23 newest-cni-366970 kubelet[1302]: I1026 08:32:23.548627    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t2z7c" podStartSLOduration=2.5486006960000003 podStartE2EDuration="2.548600696s" podCreationTimestamp="2025-10-26 08:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:23.548542784 +0000 UTC m=+7.189813866" watchObservedRunningTime="2025-10-26 08:32:23.548600696 +0000 UTC m=+7.189871777"
	Oct 26 08:32:23 newest-cni-366970 kubelet[1302]: I1026 08:32:23.579906    1302 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vzchv" podStartSLOduration=2.579882064 podStartE2EDuration="2.579882064s" podCreationTimestamp="2025-10-26 08:32:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-26 08:32:23.564487965 +0000 UTC m=+7.205759047" watchObservedRunningTime="2025-10-26 08:32:23.579882064 +0000 UTC m=+7.221153145"
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-366970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9xk4x storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner: exit status 1 (64.762937ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9xk4x" not found
	Error from server (NotFound): pods "storage-provisioner" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.26s)

TestStartStop/group/newest-cni/serial/Pause (6.94s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-366970 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-366970 --alsologtostderr -v=1: exit status 80 (2.427027525s)

-- stdout --
	* Pausing node newest-cni-366970 ... 
	
	

-- /stdout --
** stderr ** 
	I1026 08:32:40.120295  286915 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:40.120599  286915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:40.120610  286915 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:40.120614  286915 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:40.120851  286915 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:40.121111  286915 out.go:368] Setting JSON to false
	I1026 08:32:40.121147  286915 mustload.go:65] Loading cluster: newest-cni-366970
	I1026 08:32:40.121558  286915 config.go:182] Loaded profile config "newest-cni-366970": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:40.121948  286915 cli_runner.go:164] Run: docker container inspect newest-cni-366970 --format={{.State.Status}}
	I1026 08:32:40.141613  286915 host.go:66] Checking if "newest-cni-366970" exists ...
	I1026 08:32:40.141923  286915 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:40.208521  286915 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:78 OomKillDisable:false NGoroutines:87 SystemTime:2025-10-26 08:32:40.197598543 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:40.209429  286915 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-366970 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:32:40.211595  286915 out.go:179] * Pausing node newest-cni-366970 ... 
	I1026 08:32:40.213097  286915 host.go:66] Checking if "newest-cni-366970" exists ...
	I1026 08:32:40.213384  286915 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:40.213433  286915 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:40.232902  286915 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:40.337894  286915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:40.355230  286915 pause.go:52] kubelet running: true
	I1026 08:32:40.355429  286915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:40.532499  286915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:40.532593  286915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:40.656208  286915 cri.go:89] found id: "8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1"
	I1026 08:32:40.656237  286915 cri.go:89] found id: "c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95"
	I1026 08:32:40.656243  286915 cri.go:89] found id: "8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96"
	I1026 08:32:40.656389  286915 cri.go:89] found id: "97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4"
	I1026 08:32:40.656398  286915 cri.go:89] found id: "88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb"
	I1026 08:32:40.656403  286915 cri.go:89] found id: "e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4"
	I1026 08:32:40.656407  286915 cri.go:89] found id: ""
	I1026 08:32:40.656502  286915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:40.677492  286915 retry.go:31] will retry after 262.53271ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:40Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:40.941079  286915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:40.954217  286915 pause.go:52] kubelet running: false
	I1026 08:32:40.954321  286915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:41.101890  286915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:41.101980  286915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:41.182240  286915 cri.go:89] found id: "8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1"
	I1026 08:32:41.182285  286915 cri.go:89] found id: "c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95"
	I1026 08:32:41.182292  286915 cri.go:89] found id: "8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96"
	I1026 08:32:41.182304  286915 cri.go:89] found id: "97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4"
	I1026 08:32:41.182309  286915 cri.go:89] found id: "88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb"
	I1026 08:32:41.182313  286915 cri.go:89] found id: "e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4"
	I1026 08:32:41.182318  286915 cri.go:89] found id: ""
	I1026 08:32:41.182372  286915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:41.197733  286915 retry.go:31] will retry after 373.811754ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:41.572389  286915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:41.590632  286915 pause.go:52] kubelet running: false
	I1026 08:32:41.590698  286915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:41.786317  286915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:41.786400  286915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:41.887201  286915 cri.go:89] found id: "8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1"
	I1026 08:32:41.887220  286915 cri.go:89] found id: "c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95"
	I1026 08:32:41.887226  286915 cri.go:89] found id: "8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96"
	I1026 08:32:41.887231  286915 cri.go:89] found id: "97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4"
	I1026 08:32:41.887235  286915 cri.go:89] found id: "88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb"
	I1026 08:32:41.887239  286915 cri.go:89] found id: "e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4"
	I1026 08:32:41.887243  286915 cri.go:89] found id: ""
	I1026 08:32:41.887313  286915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:41.905365  286915 retry.go:31] will retry after 288.017338ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:42.193853  286915 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:42.211756  286915 pause.go:52] kubelet running: false
	I1026 08:32:42.211820  286915 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:32:42.377108  286915 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:32:42.377192  286915 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:32:42.455308  286915 cri.go:89] found id: "8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1"
	I1026 08:32:42.455339  286915 cri.go:89] found id: "c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95"
	I1026 08:32:42.455346  286915 cri.go:89] found id: "8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96"
	I1026 08:32:42.455351  286915 cri.go:89] found id: "97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4"
	I1026 08:32:42.455355  286915 cri.go:89] found id: "88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb"
	I1026 08:32:42.455360  286915 cri.go:89] found id: "e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4"
	I1026 08:32:42.455364  286915 cri.go:89] found id: ""
	I1026 08:32:42.455410  286915 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:32:42.472686  286915 out.go:203] 
	W1026 08:32:42.474621  286915 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:42Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:42Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:32:42.474640  286915 out.go:285] * 
	* 
	W1026 08:32:42.483666  286915 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:32:42.485590  286915 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-366970 --alsologtostderr -v=1 failed: exit status 80
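
The root cause above is the single remote call `sudo runc list -f json`: /run/runc is runc's default state root, and when nothing has ever registered a container under that root the directory does not exist, so the listing exits non-zero even though crictl reports running containers (CRI-O may track state under a different OCI runtime root, for example /run/crun when crun is the default). A minimal Go sketch of the failing call; the helper shape is assumed, not copied from minikube's pause code:

package main

import (
	"fmt"
	"os/exec"
)

// listRuncContainers mirrors the failing call from the log:
// `sudo runc list -f json`.
func listRuncContainers() ([]byte, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// "open /run/runc: no such file or directory" means runc's state
		// root was never created on this node; the containers crictl sees
		// may be managed under a different runtime root.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	return out, nil
}

func main() {
	out, err := listRuncContainers()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(string(out))
}
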
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-366970
helpers_test.go:243: (dbg) docker inspect newest-cni-366970:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	        "Created": "2025-10-26T08:31:59.079010399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283968,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:32:28.565849462Z",
	            "FinishedAt": "2025-10-26T08:32:27.66259687Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hostname",
	        "HostsPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hosts",
	        "LogPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018-json.log",
	        "Name": "/newest-cni-366970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-366970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-366970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	                "LowerDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-366970",
	                "Source": "/var/lib/docker/volumes/newest-cni-366970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-366970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-366970",
	                "name.minikube.sigs.k8s.io": "newest-cni-366970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb07d817ef11ad89bdba249a87b7cb3a2a2befa351f5a884957e1103b33cc7f2",
	            "SandboxKey": "/var/run/docker/netns/eb07d817ef11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-366970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:ec:b7:01:e5:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19ada62bb6d68780491bac6cfa6c8306dbe7ffb9866d24de190e8d5c662067df",
	                    "EndpointID": "4ea30addc71cebec20f0728fdc35c1b51ef3d079a91cd722086268600ebab0ae",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-366970",
	                        "c16db157b89e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
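
The block above is the full `docker inspect` JSON captured for the post-mortem. When only a field or two is needed, the same Go templates that appear in the cli_runner lines later in this log (for example the 22/tcp HostPort lookup) can query them directly; a minimal sketch, with the helper name assumed:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// inspectField runs `docker inspect -f <template> <container>`; the port
// template below is the same one the log's cli_runner invocations use.
func inspectField(container, format string) (string, error) {
	out, err := exec.Command("docker", "inspect", "-f", format, container).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	name := "newest-cni-366970"
	status, _ := inspectField(name, "{{.State.Status}}")
	sshPort, _ := inspectField(name, `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`)
	fmt.Printf("status=%s ssh=127.0.0.1:%s\n", status, sshPort)
}
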
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970: exit status 2 (396.582515ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
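
`minikube status` reports through its exit code as well as stdout, which is why the helper records exit status 2 while the host line still prints Running. A sketch of recovering that code with Go's exec package (illustrative, not the test helpers' actual code):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status", "--format={{.Host}}", "-p", "newest-cni-366970")
	out, err := cmd.Output()
	fmt.Printf("stdout: %s", out)
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero code is expected when any component is not Running;
		// 2 in the run above, with the host itself still up.
		fmt.Println("exit code:", exitErr.ExitCode())
	}
}
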
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25: (1.028186635s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-866212 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ stop    │ -p newest-cni-366970 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-366970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-866212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ newest-cni-366970 image list --format=json                                                                                                                                                                                                    │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p newest-cni-366970 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ ssh     │ -p auto-110992 pgrep -a kubelet                                                                                                                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:32:36.287815  285842 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:36.288166  285842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:36.288195  285842 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:36.288211  285842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:36.288583  285842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:36.289128  285842 out.go:368] Setting JSON to false
	I1026 08:32:36.290727  285842 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4507,"bootTime":1761463049,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:36.290879  285842 start.go:141] virtualization: kvm guest
	I1026 08:32:36.294076  285842 out.go:179] * [default-k8s-diff-port-866212] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:36.295341  285842 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:36.295379  285842 notify.go:220] Checking for updates...
	I1026 08:32:36.297602  285842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:36.298732  285842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:36.299959  285842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:36.302162  285842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:36.303428  285842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:36.305362  285842 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:36.306095  285842 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:36.348024  285842 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:36.348131  285842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:36.454780  285842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:36.43857477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:36.454959  285842 docker.go:318] overlay module found
	I1026 08:32:36.457631  285842 out.go:179] * Using the docker driver based on existing profile
	I1026 08:32:36.458802  285842 start.go:305] selected driver: docker
	I1026 08:32:36.458817  285842 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:36.458926  285842 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:36.459826  285842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:36.567025  285842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:36.543936452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:36.568652  285842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:36.569656  285842 cni.go:84] Creating CNI manager for ""
	I1026 08:32:36.569733  285842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:36.569808  285842 start.go:349] cluster config:
	{Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:36.571806  285842 out.go:179] * Starting "default-k8s-diff-port-866212" primary control-plane node in "default-k8s-diff-port-866212" cluster
	I1026 08:32:36.572919  285842 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:36.574129  285842 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:32:33.235810  278592 out.go:252]   - Booting up control plane ...
	I1026 08:32:33.235965  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:32:33.236088  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:32:33.236940  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:32:33.253962  278592 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:32:33.254175  278592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:32:33.262161  278592 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:32:33.262368  278592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:32:33.262415  278592 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:32:33.371376  278592 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:32:33.371587  278592 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:32:34.372079  278592 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000956238s
	I1026 08:32:34.376717  278592 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:32:34.376823  278592 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1026 08:32:34.376965  278592 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:32:34.377045  278592 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
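
The [control-plane-check] steps above poll each component's health endpoint until it answers 200 OK or the 4m0s budget expires. A minimal sketch of such a polling loop, assuming kubeadm's real implementation differs in detail:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls a health endpoint until it returns 200 or the budget
// runs out. Control-plane endpoints use self-signed certificates, hence
// InsecureSkipVerify for the probe.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy within %s", url, timeout)
}

func main() {
	// Endpoint copied from the kube-scheduler check in the log.
	fmt.Println(waitHealthy("https://127.0.0.1:10259/livez", 4*time.Minute))
}
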
	I1026 08:32:36.575306  285842 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:36.575361  285842 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:36.575370  285842 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:36.575455  285842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:36.575477  285842 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:36.575489  285842 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:36.575608  285842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/config.json ...
	I1026 08:32:36.603818  285842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:36.603841  285842 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:36.603856  285842 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:32:36.603884  285842 start.go:360] acquireMachinesLock for default-k8s-diff-port-866212: {Name:mk3a220b332ac4d01b8cbea8443619f058df29a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:36.603935  285842 start.go:364] duration metric: took 31.938µs to acquireMachinesLock for "default-k8s-diff-port-866212"
	I1026 08:32:36.603955  285842 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:32:36.603962  285842 fix.go:54] fixHost starting: 
	I1026 08:32:36.604278  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:36.630357  285842 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866212: state=Stopped err=<nil>
	W1026 08:32:36.630391  285842 fix.go:138] unexpected machine state, will restart: <nil>
	I1026 08:32:35.609968  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:32:35.610025  283772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:32:35.610100  283772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:35.646385  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.649239  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.655428  283772 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:35.655501  283772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:32:35.655613  283772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:35.692359  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.801856  283772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:35.820615  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:32:35.820652  283772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:32:35.828877  283772 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:35.828944  283772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:35.855091  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:35.857328  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:32:35.865028  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:32:35.865051  283772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:32:35.871835  283772 api_server.go:72] duration metric: took 318.785719ms to wait for apiserver process to appear ...
	I1026 08:32:35.871860  283772 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:35.871877  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:35.903287  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:32:35.903311  283772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:32:35.949219  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:32:35.949258  283772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:32:35.983656  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:32:35.983681  283772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:32:36.017005  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:32:36.017033  283772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:32:36.040091  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:32:36.040115  283772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:32:36.059886  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:32:36.059918  283772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:32:36.079888  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:32:36.079912  283772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:32:36.102184  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:32:37.685740  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:32:37.685769  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:32:37.685786  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:37.694061  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:32:37.694095  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:32:37.766631  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.911429313s)
	I1026 08:32:37.872813  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:37.883698  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:37.883723  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
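
In the verbose /healthz bodies above, [+] marks a passing check and [-] a failing one; the probe keeps retrying until no [-] entries remain (here the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststarthooks had not finished yet). A small parser for that body format, offered as a sketch:

package main

import (
	"fmt"
	"strings"
)

// failingChecks extracts the names of "[-]" entries from a verbose /healthz
// body like the one printed above.
func failingChecks(body string) []string {
	var failed []string
	for _, line := range strings.Split(body, "\n") {
		if strings.HasPrefix(line, "[-]") {
			failed = append(failed, strings.Fields(strings.TrimPrefix(line, "[-]"))[0])
		}
	}
	return failed
}

func main() {
	body := "[+]ping ok\n[-]poststarthook/rbac/bootstrap-roles failed: reason withheld\nhealthz check failed"
	fmt.Println(failingChecks(body)) // [poststarthook/rbac/bootstrap-roles]
}
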
	I1026 08:32:38.302538  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.445094432s)
	I1026 08:32:38.302672  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.200437641s)
	I1026 08:32:38.304280  283772 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-366970 addons enable metrics-server
	
	I1026 08:32:38.305629  283772 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 08:32:38.307171  283772 addons.go:514] duration metric: took 2.753568518s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 08:32:36.883431  278592 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.506594538s
	I1026 08:32:37.219481  278592 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.842605296s
	I1026 08:32:38.878516  278592 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.50172392s
	I1026 08:32:38.890900  278592 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:32:38.901734  278592 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:32:38.911463  278592 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:32:38.911709  278592 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:32:38.919399  278592 kubeadm.go:318] [bootstrap-token] Using token: 3wo4un.gmsrrfm9ihz27mks
	I1026 08:32:38.372839  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:38.377619  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:38.377648  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:32:38.872268  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:38.876719  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:38.876757  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:32:39.372348  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:39.376531  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 08:32:39.377607  283772 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:39.377635  283772 api_server.go:131] duration metric: took 3.505767136s to wait for apiserver health ...
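	
	For reference, a 500 from /healthz during startup is expected while post-start hooks (here rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes) finish; the probe can be reproduced by hand. A sketch, reusing the endpoint and profile name from this run:
	
	  # verbose healthz prints the same per-hook [+]/[-] lines as above
	  kubectl --context newest-cni-366970 get --raw '/healthz?verbose'
	  # plain curl also works when anonymous /healthz access (the kubeadm default) is enabled
	  curl -sk 'https://192.168.85.2:8443/healthz?verbose'
	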
	I1026 08:32:39.377646  283772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:39.381606  283772 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:39.381654  283772 system_pods.go:61] "coredns-66bc5c9577-9xk4x" [4d2bf056-0455-412c-ab4c-5c5680aff306] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:39.381680  283772 system_pods.go:61] "etcd-newest-cni-366970" [5879f65b-4bc9-45bb-b7ea-97a3f98a0854] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:32:39.381696  283772 system_pods.go:61] "kindnet-vzchv" [1a35b08e-08fd-4546-b4c0-79f6e3f3f29b] Running
	I1026 08:32:39.381705  283772 system_pods.go:61] "kube-apiserver-newest-cni-366970" [6a35c9e5-f940-4ed4-844c-6a1314e1a01d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:32:39.381715  283772 system_pods.go:61] "kube-controller-manager-newest-cni-366970" [e32bbffb-6e52-422f-aedf-a15bd47f2e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:32:39.381722  283772 system_pods.go:61] "kube-proxy-t2z7c" [73aa16de-9d34-4a0f-9c14-8ec0306d69f6] Running
	I1026 08:32:39.381733  283772 system_pods.go:61] "kube-scheduler-newest-cni-366970" [ad5a05b4-584f-4bd2-9f5b-1635269c14d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:32:39.381744  283772 system_pods.go:61] "storage-provisioner" [1f9d7ffb-20ea-4a1f-a5c0-7b8b0ab3e7b0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:39.381752  283772 system_pods.go:74] duration metric: took 4.100255ms to wait for pod list to return data ...
	I1026 08:32:39.381766  283772 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:39.384046  283772 default_sa.go:45] found service account: "default"
	I1026 08:32:39.384064  283772 default_sa.go:55] duration metric: took 2.291738ms for default service account to be created ...
	I1026 08:32:39.384074  283772 kubeadm.go:586] duration metric: took 3.831030033s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 08:32:39.384087  283772 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:39.386377  283772 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:39.386398  283772 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:39.386410  283772 node_conditions.go:105] duration metric: took 2.319581ms to run NodePressure ...
	I1026 08:32:39.386419  283772 start.go:241] waiting for startup goroutines ...
	I1026 08:32:39.386425  283772 start.go:246] waiting for cluster config update ...
	I1026 08:32:39.386437  283772 start.go:255] writing updated cluster config ...
	I1026 08:32:39.386676  283772 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:39.440766  283772 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:39.442552  283772 out.go:179] * Done! kubectl is now configured to use "newest-cni-366970" cluster and "default" namespace by default
	W1026 08:32:35.937383  273227 node_ready.go:57] node "auto-110992" has "Ready":"False" status (will retry)
	I1026 08:32:37.936631  273227 node_ready.go:49] node "auto-110992" is "Ready"
	I1026 08:32:37.936660  273227 node_ready.go:38] duration metric: took 11.003540663s for node "auto-110992" to be "Ready" ...
	I1026 08:32:37.936679  273227 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:37.936724  273227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:37.957233  273227 api_server.go:72] duration metric: took 11.362600646s to wait for apiserver process to appear ...
	I1026 08:32:37.957287  273227 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:37.957309  273227 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:32:37.964967  273227 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:32:37.966140  273227 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:37.966167  273227 api_server.go:131] duration metric: took 8.873333ms to wait for apiserver health ...
	I1026 08:32:37.966177  273227 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:37.976394  273227 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:37.976476  273227 system_pods.go:61] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:37.976494  273227 system_pods.go:61] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:37.976508  273227 system_pods.go:61] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:37.976513  273227 system_pods.go:61] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:37.976519  273227 system_pods.go:61] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:37.976528  273227 system_pods.go:61] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:37.976533  273227 system_pods.go:61] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:37.976543  273227 system_pods.go:61] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:37.976551  273227 system_pods.go:74] duration metric: took 10.366956ms to wait for pod list to return data ...
	I1026 08:32:37.976573  273227 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:37.979360  273227 default_sa.go:45] found service account: "default"
	I1026 08:32:37.979385  273227 default_sa.go:55] duration metric: took 2.802954ms for default service account to be created ...
	I1026 08:32:37.979396  273227 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:32:37.982495  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:37.982532  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:37.982541  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:37.982548  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:37.982554  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:37.982565  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:37.982571  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:37.982576  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:37.982584  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:37.982610  273227 retry.go:31] will retry after 197.407798ms: missing components: kube-dns
	I1026 08:32:38.185817  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:38.185863  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:38.185873  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:38.185880  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:38.185885  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:38.185890  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:38.185897  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:38.185902  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:38.185908  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:38.185926  273227 retry.go:31] will retry after 350.706535ms: missing components: kube-dns
	I1026 08:32:38.541560  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:38.541596  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Running
	I1026 08:32:38.541604  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:38.541610  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:38.541614  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:38.541618  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:38.541624  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:38.541629  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:38.541634  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Running
	I1026 08:32:38.541644  273227 system_pods.go:126] duration metric: took 562.241317ms to wait for k8s-apps to be running ...
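	
	The retry loop above ("will retry ...: missing components: kube-dns") can be approximated from the outside with kubectl wait; a sketch, assuming the kubeconfig context matches the profile name:
	
	  # block until the CoreDNS pod reports Ready (the condition being polled for above)
	  kubectl --context auto-110992 -n kube-system wait \
	    --for=condition=Ready pod -l k8s-app=kube-dns --timeout=2m
	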
	I1026 08:32:38.541658  273227 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:32:38.541705  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:38.557718  273227 system_svc.go:56] duration metric: took 16.052106ms WaitForService to wait for kubelet
	I1026 08:32:38.557752  273227 kubeadm.go:586] duration metric: took 11.96312275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:38.557777  273227 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:38.561304  273227 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:38.561335  273227 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:38.561354  273227 node_conditions.go:105] duration metric: took 3.571184ms to run NodePressure ...
	I1026 08:32:38.561370  273227 start.go:241] waiting for startup goroutines ...
	I1026 08:32:38.561380  273227 start.go:246] waiting for cluster config update ...
	I1026 08:32:38.561398  273227 start.go:255] writing updated cluster config ...
	I1026 08:32:38.561703  273227 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:38.566537  273227 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:38.570905  273227 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bdpf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.576440  273227 pod_ready.go:94] pod "coredns-66bc5c9577-bdpf4" is "Ready"
	I1026 08:32:38.576469  273227 pod_ready.go:86] duration metric: took 5.537642ms for pod "coredns-66bc5c9577-bdpf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.579025  273227 pod_ready.go:83] waiting for pod "etcd-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.583532  273227 pod_ready.go:94] pod "etcd-auto-110992" is "Ready"
	I1026 08:32:38.583556  273227 pod_ready.go:86] duration metric: took 4.499354ms for pod "etcd-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.585793  273227 pod_ready.go:83] waiting for pod "kube-apiserver-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.590542  273227 pod_ready.go:94] pod "kube-apiserver-auto-110992" is "Ready"
	I1026 08:32:38.590565  273227 pod_ready.go:86] duration metric: took 4.751436ms for pod "kube-apiserver-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.592769  273227 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.972312  273227 pod_ready.go:94] pod "kube-controller-manager-auto-110992" is "Ready"
	I1026 08:32:38.972340  273227 pod_ready.go:86] duration metric: took 379.550394ms for pod "kube-controller-manager-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.171975  273227 pod_ready.go:83] waiting for pod "kube-proxy-7rts2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.571227  273227 pod_ready.go:94] pod "kube-proxy-7rts2" is "Ready"
	I1026 08:32:39.571273  273227 pod_ready.go:86] duration metric: took 399.27282ms for pod "kube-proxy-7rts2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.772459  273227 pod_ready.go:83] waiting for pod "kube-scheduler-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:40.171958  273227 pod_ready.go:94] pod "kube-scheduler-auto-110992" is "Ready"
	I1026 08:32:40.171987  273227 pod_ready.go:86] duration metric: took 399.502181ms for pod "kube-scheduler-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:40.172003  273227 pod_ready.go:40] duration metric: took 1.605432017s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:40.228208  273227 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:40.230124  273227 out.go:179] * Done! kubectl is now configured to use "auto-110992" cluster and "default" namespace by default
	I1026 08:32:38.920866  278592 out.go:252]   - Configuring RBAC rules ...
	I1026 08:32:38.921026  278592 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:32:38.924535  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:32:38.931606  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:32:38.934544  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:32:38.937186  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:32:38.939690  278592 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:32:39.284618  278592 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:32:39.707874  278592 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:32:40.285587  278592 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:32:40.287147  278592 kubeadm.go:318] 
	I1026 08:32:40.287273  278592 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:32:40.287303  278592 kubeadm.go:318] 
	I1026 08:32:40.287397  278592 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:32:40.287406  278592 kubeadm.go:318] 
	I1026 08:32:40.287426  278592 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:32:40.287478  278592 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:32:40.287573  278592 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:32:40.287588  278592 kubeadm.go:318] 
	I1026 08:32:40.287662  278592 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:32:40.287673  278592 kubeadm.go:318] 
	I1026 08:32:40.287749  278592 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:32:40.287765  278592 kubeadm.go:318] 
	I1026 08:32:40.287807  278592 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:32:40.287903  278592 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:32:40.287990  278592 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:32:40.288003  278592 kubeadm.go:318] 
	I1026 08:32:40.288105  278592 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:32:40.288201  278592 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:32:40.288218  278592 kubeadm.go:318] 
	I1026 08:32:40.288356  278592 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3wo4un.gmsrrfm9ihz27mks \
	I1026 08:32:40.288500  278592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:32:40.288578  278592 kubeadm.go:318] 	--control-plane 
	I1026 08:32:40.288588  278592 kubeadm.go:318] 
	I1026 08:32:40.288752  278592 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:32:40.288764  278592 kubeadm.go:318] 
	I1026 08:32:40.288866  278592 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3wo4un.gmsrrfm9ihz27mks \
	I1026 08:32:40.288993  278592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:32:40.291430  278592 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:32:40.291581  278592 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
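	
	The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed on the control plane to verify a join command; this is the procedure documented for kubeadm, run inside the node:
	
	  # sha256 over the DER-encoded public key of the cluster CA
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'
	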
	I1026 08:32:40.291614  278592 cni.go:84] Creating CNI manager for "kindnet"
	I1026 08:32:40.297795  278592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:32:36.632016  285842 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-866212" ...
	I1026 08:32:36.632082  285842 cli_runner.go:164] Run: docker start default-k8s-diff-port-866212
	I1026 08:32:37.007492  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:37.039927  285842 kic.go:430] container "default-k8s-diff-port-866212" state is running.
	I1026 08:32:37.040589  285842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-866212
	I1026 08:32:37.077473  285842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/config.json ...
	I1026 08:32:37.077782  285842 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:37.077878  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:37.104666  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:37.104993  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:37.105019  285842 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:37.105825  285842 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:32:40.264381  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866212
	
	I1026 08:32:40.264410  285842 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-866212"
	I1026 08:32:40.264469  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:40.288221  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:40.289706  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:40.289733  285842 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866212 && echo "default-k8s-diff-port-866212" | sudo tee /etc/hostname
	I1026 08:32:40.465761  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866212
	
	I1026 08:32:40.465857  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:40.488600  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:40.488930  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:40.488987  285842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866212' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866212/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866212' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:40.647729  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:32:40.647766  285842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:40.647816  285842 ubuntu.go:190] setting up certificates
	I1026 08:32:40.647828  285842 provision.go:84] configureAuth start
	I1026 08:32:40.647890  285842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-866212
	I1026 08:32:40.678789  285842 provision.go:143] copyHostCerts
	I1026 08:32:40.678846  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:40.678856  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:40.678917  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:40.679045  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:40.679051  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:40.679080  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:40.679142  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:40.679146  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:40.679169  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:40.679227  285842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866212 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-866212 localhost minikube]
	I1026 08:32:41.070782  285842 provision.go:177] copyRemoteCerts
	I1026 08:32:41.070853  285842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:41.070899  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.093330  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:41.203075  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:41.222522  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 08:32:41.240452  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:32:41.259597  285842 provision.go:87] duration metric: took 611.75519ms to configureAuth
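	
	The SAN list requested above (127.0.0.1, 192.168.94.2, plus hostnames) can be checked in the generated server certificate; a sketch using the path from this run:
	
	  openssl x509 -noout -text \
	    -in /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem \
	    | grep -A1 'Subject Alternative Name'
	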
	I1026 08:32:41.259626  285842 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:41.259833  285842 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:41.259956  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.281776  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:41.282003  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:41.282020  285842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:40.299456  278592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:32:40.304229  278592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:32:40.304294  278592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:32:40.319470  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:32:40.620190  278592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:32:40.620365  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:40.620474  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-110992 minikube.k8s.io/updated_at=2025_10_26T08_32_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=kindnet-110992 minikube.k8s.io/primary=true
	I1026 08:32:40.722153  278592 ops.go:34] apiserver oom_adj: -16
	I1026 08:32:40.722314  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:41.222386  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
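	
	The polling above for the "default" service account has a one-line kubectl equivalent; a sketch (jsonpath waits require kubectl >= 1.23):
	
	  kubectl --context kindnet-110992 -n default wait \
	    --for=jsonpath='{.metadata.name}'=default serviceaccount/default --timeout=60s
	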
	
	
	==> CRI-O <==
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.039386726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.040153472Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bcabb930-dac9-4cd2-9b7f-3f87337edb05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.043711516Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.044607366Z" level=info msg="Ran pod sandbox 8ddb01627d910be04d2b68b43410388400cee1814fd7792289ecfa0776a7a51e with infra container: kube-system/kube-proxy-t2z7c/POD" id=bcabb930-dac9-4cd2-9b7f-3f87337edb05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.045189274Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cec2247a-5e0a-4747-a9ec-91f73c74e3e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.046459168Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d964555f-0dcd-4d07-a823-6a149320f97c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.046991834Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.048075546Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d0ed0ee3-f33c-4a52-badf-7c353d7e674d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.049199443Z" level=info msg="Creating container: kube-system/kube-proxy-t2z7c/kube-proxy" id=c4f6e5c7-cf53-49ae-b4e0-cd23676b2470 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.04937873Z" level=info msg="Ran pod sandbox 466d5695298eb6b0c9b05683e1e3b63dc660bb0c9bd8bbb4241caab7304235a2 with infra container: kube-system/kindnet-vzchv/POD" id=cec2247a-5e0a-4747-a9ec-91f73c74e3e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.049386881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.050710876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0d74784e-aea4-49dc-88a9-134dbb0a8a2e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.052933886Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8fc761bb-0cb7-491a-a9ec-db6e63df191a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.054343883Z" level=info msg="Creating container: kube-system/kindnet-vzchv/kindnet-cni" id=37999dd9-9d88-4a16-8c4f-d27b5ae78131 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.05445268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.054469751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.055017831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.059738386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.06012808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.091498969Z" level=info msg="Created container 8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1: kube-system/kindnet-vzchv/kindnet-cni" id=37999dd9-9d88-4a16-8c4f-d27b5ae78131 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.092288929Z" level=info msg="Starting container: 8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1" id=5c594ebe-3167-4a66-a465-f136901bea02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.094348699Z" level=info msg="Started container" PID=1041 containerID=8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1 description=kube-system/kindnet-vzchv/kindnet-cni id=5c594ebe-3167-4a66-a465-f136901bea02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=466d5695298eb6b0c9b05683e1e3b63dc660bb0c9bd8bbb4241caab7304235a2
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.095813806Z" level=info msg="Created container c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95: kube-system/kube-proxy-t2z7c/kube-proxy" id=c4f6e5c7-cf53-49ae-b4e0-cd23676b2470 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.096632531Z" level=info msg="Starting container: c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95" id=7ef4ef8f-1000-4481-a0bb-99b82ca51cc6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.100188148Z" level=info msg="Started container" PID=1040 containerID=c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95 description=kube-system/kube-proxy-t2z7c/kube-proxy id=7ef4ef8f-1000-4481-a0bb-99b82ca51cc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddb01627d910be04d2b68b43410388400cee1814fd7792289ecfa0776a7a51e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8941a15ad64a6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   466d5695298eb       kindnet-vzchv                               kube-system
	c8e42df1bc950       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   8ddb01627d910       kube-proxy-t2z7c                            kube-system
	8f559cba054d7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   56583daa78461       kube-apiserver-newest-cni-366970            kube-system
	97e8121d14be8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   f5fdbefb7bca6       kube-scheduler-newest-cni-366970            kube-system
	88ac6a66e7ed4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   3baa5ec02076f       kube-controller-manager-newest-cni-366970   kube-system
	e66c30c535197       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   e952709622848       etcd-newest-cni-366970                      kube-system
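	
	The table above is crictl-style output; the same view can be pulled live from inside the node, e.g.:
	
	  # -a includes exited containers (the ATTEMPT/STATE columns shown above)
	  minikube -p newest-cni-366970 ssh -- sudo crictl ps -a
	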
	
	
	==> describe nodes <==
	Name:               newest-cni-366970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-366970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-366970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_32_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:32:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-366970
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-366970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a456af2-76d6-4f3f-b16f-fdf9a4915e23
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-366970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         28s
	  kube-system                 kindnet-vzchv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      22s
	  kube-system                 kube-apiserver-newest-cni-366970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-controller-manager-newest-cni-366970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-t2z7c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         22s
	  kube-system                 kube-scheduler-newest-cni-366970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 32s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s                kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s                node-controller  Node newest-cni-366970 event: Registered Node newest-cni-366970 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)    kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-366970 event: Registered Node newest-cni-366970 in Controller
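	
	The KubeletNotReady condition and not-ready taint above clear once kindnet writes its CNI config; a sketch for watching that by hand:
	
	  # the config kubelet is waiting for appears under /etc/cni/net.d
	  minikube -p newest-cni-366970 ssh -- ls /etc/cni/net.d
	  kubectl --context newest-cni-366970 get nodes -w
	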
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4] <==
	{"level":"warn","ts":"2025-10-26T08:32:36.801477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.824734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.841045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.862759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.880624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.890273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.898264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.906328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.914029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.922342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.932365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.940388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.949420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.969441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.979379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.987076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.994976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.003900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.017450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.021297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.033203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.050492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.074192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.082791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.158580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45726","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:43 up  1:15,  0 user,  load average: 6.89, 4.12, 2.44
	Linux newest-cni-366970 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1] <==
	I1026 08:32:38.298858       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:32:38.299129       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 08:32:38.299287       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:32:38.299355       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:32:38.299372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:32:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:32:38.677692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:32:38.677843       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:32:38.677880       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:32:38.694627       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:32:38.978431       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:32:38.978457       1 metrics.go:72] Registering metrics
	I1026 08:32:38.978507       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96] <==
	I1026 08:32:37.748647       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 08:32:37.749133       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:32:37.750459       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:32:37.750507       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:32:37.750516       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:32:37.750523       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:32:37.750529       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:32:37.751001       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:32:37.761800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:32:37.763104       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:32:37.763172       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:32:37.773482       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:32:37.773512       1 policy_source.go:240] refreshing policies
	I1026 08:32:37.788767       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:32:37.793192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:32:38.069121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:32:38.105152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:32:38.130533       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:32:38.138909       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:32:38.200335       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.213.218"}
	I1026 08:32:38.217958       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.53.217"}
	I1026 08:32:38.646486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:32:41.161534       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:32:41.413649       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:32:41.616919       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb] <==
	I1026 08:32:41.054588       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:32:41.054599       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:32:41.057382       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:32:41.057452       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:32:41.057454       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:32:41.057722       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:32:41.058656       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 08:32:41.058693       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:32:41.058698       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:32:41.058789       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:32:41.058889       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:32:41.059144       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:32:41.060624       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:32:41.061848       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:32:41.065099       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 08:32:41.065119       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:32:41.065129       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:32:41.065209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:32:41.065210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:32:41.067405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:41.069561       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:32:41.069655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:32:41.069755       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-366970"
	I1026 08:32:41.069802       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 08:32:41.091882       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95] <==
	I1026 08:32:38.140112       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:32:38.205044       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:32:38.305802       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:32:38.305841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 08:32:38.305928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:32:38.330084       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:32:38.330161       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:32:38.336350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:32:38.336753       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:32:38.336791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:38.338864       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:32:38.338888       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:32:38.338924       1 config.go:200] "Starting service config controller"
	I1026 08:32:38.338935       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:32:38.339106       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:32:38.339132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:32:38.339713       1 config.go:309] "Starting node config controller"
	I1026 08:32:38.339747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:32:38.339756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:32:38.439089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:32:38.439098       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:32:38.439783       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4] <==
	I1026 08:32:35.847410       1 serving.go:386] Generated self-signed cert in-memory
	W1026 08:32:37.662885       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 08:32:37.662922       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 08:32:37.662934       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 08:32:37.662959       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 08:32:37.703490       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:32:37.703523       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:37.707907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:32:37.707872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:37.707967       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:32:37.708681       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:37.809885       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.732469     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786590     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786699     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786746     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.787635     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.790952     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-xtables-lock\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.791052     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-lib-modules\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.796149     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.796586     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.807739     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-366970\" already exists" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.807746     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-366970\" already exists" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.832971     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.846593     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-366970\" already exists" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.846636     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.855757     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-366970\" already exists" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.855796     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.865428     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-366970\" already exists" pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.865465     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.879380     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-366970\" already exists" pod="kube-system/kube-scheduler-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.892359     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-lib-modules\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.893169     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-cni-cfg\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.893329     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-xtables-lock\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
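
The component logs above are a one-shot capture taken at failure time. A minimal sketch for re-pulling the same logs against a live profile, assuming newest-cni-366970 still exists and its kubeconfig context resolves; the label selectors rely on the component= labels kubeadm places on static control-plane pods:

    out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25
    kubectl --context newest-cni-366970 -n kube-system logs -l component=kube-apiserver --tail=25
    kubectl --context newest-cni-366970 -n kube-system logs -l component=kube-scheduler --tail=25
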
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366970 -n newest-cni-366970: exit status 2 (378.67306ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
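The --format={{.APIServer}} argument is a Go text/template rendered over minikube's status struct, so other components can be probed the same way. A sketch using the field names minikube documents (Host, Kubelet, APIServer; treat the exact field set as an assumption):

    out/minikube-linux-amd64 status -p newest-cni-366970 --format='host:{{.Host}} kubelet:{{.Kubelet}} apiserver:{{.APIServer}}'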
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-366970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc: exit status 1 (66.774204ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9xk4x" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-66bv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vhtdc" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc: exit status 1
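The non-running-pods probe above is an ordinary field selector; the same query can also print each offender's phase, assuming the context still resolves:

    kubectl --context newest-cni-366970 get pods -A \
      --field-selector=status.phase!=Running \
      -o custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,PHASE:.status.phase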
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-366970
helpers_test.go:243: (dbg) docker inspect newest-cni-366970:

-- stdout --
	[
	    {
	        "Id": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	        "Created": "2025-10-26T08:31:59.079010399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 283968,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:32:28.565849462Z",
	            "FinishedAt": "2025-10-26T08:32:27.66259687Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hostname",
	        "HostsPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/hosts",
	        "LogPath": "/var/lib/docker/containers/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018/c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018-json.log",
	        "Name": "/newest-cni-366970",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-366970:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-366970",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c16db157b89eab013aba0898ee41ce6ca0f26518d9f2d3be447ffb975ab58018",
	                "LowerDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/merged",
	                "UpperDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/diff",
	                "WorkDir": "/var/lib/docker/overlay2/aea0a5ed2ad3415011b41f9205844db626d056ea7edf0ff835d03501b925eccd/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-366970",
	                "Source": "/var/lib/docker/volumes/newest-cni-366970/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-366970",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-366970",
	                "name.minikube.sigs.k8s.io": "newest-cni-366970",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "eb07d817ef11ad89bdba249a87b7cb3a2a2befa351f5a884957e1103b33cc7f2",
	            "SandboxKey": "/var/run/docker/netns/eb07d817ef11",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-366970": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:ec:b7:01:e5:a9",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "19ada62bb6d68780491bac6cfa6c8306dbe7ffb9866d24de190e8d5c662067df",
	                    "EndpointID": "4ea30addc71cebec20f0728fdc35c1b51ef3d079a91cd722086268600ebab0ae",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-366970",
	                        "c16db157b89e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
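
Individual fields of the inspect output can be extracted with docker inspect's -f Go-template flag instead of dumping the whole document; for the container above, these sketches should print the forwarded SSH port (33106) and the container IP (192.168.85.2):

    docker inspect -f '{{ (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort }}' newest-cni-366970
    docker inspect -f '{{ (index .NetworkSettings.Networks "newest-cni-366970").IPAddress }}' newest-cni-366970
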
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970: exit status 2 (407.710842ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-366970 logs -n 25: (1.464050958s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ start   │ -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ image   │ no-preload-001983 image list --format=json                                                                                                                                                                                                    │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ pause   │ -p no-preload-001983 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │                     │
	│ delete  │ -p kubernetes-upgrade-462840                                                                                                                                                                                                                  │ kubernetes-upgrade-462840    │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:31 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:31 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p no-preload-001983                                                                                                                                                                                                                          │ no-preload-001983            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ image   │ embed-certs-752315 image list --format=json                                                                                                                                                                                                   │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p embed-certs-752315 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ delete  │ -p embed-certs-752315                                                                                                                                                                                                                         │ embed-certs-752315           │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio                                                                                                      │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-866212 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-866212 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable metrics-server -p newest-cni-366970 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ stop    │ -p newest-cni-366970 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable dashboard -p newest-cni-366970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-866212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ start   │ -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ image   │ newest-cni-366970 image list --format=json                                                                                                                                                                                                    │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	│ pause   │ -p newest-cni-366970 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-366970            │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │                     │
	│ ssh     │ -p auto-110992 pgrep -a kubelet                                                                                                                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:32 UTC │ 26 Oct 25 08:32 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:32:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:32:36.287815  285842 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:32:36.288166  285842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:36.288195  285842 out.go:374] Setting ErrFile to fd 2...
	I1026 08:32:36.288211  285842 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:32:36.288583  285842 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:32:36.289128  285842 out.go:368] Setting JSON to false
	I1026 08:32:36.290727  285842 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4507,"bootTime":1761463049,"procs":333,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:32:36.290879  285842 start.go:141] virtualization: kvm guest
	I1026 08:32:36.294076  285842 out.go:179] * [default-k8s-diff-port-866212] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:32:36.295341  285842 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:32:36.295379  285842 notify.go:220] Checking for updates...
	I1026 08:32:36.297602  285842 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:32:36.298732  285842 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:36.299959  285842 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:32:36.302162  285842 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:32:36.303428  285842 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:32:36.305362  285842 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:36.306095  285842 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:32:36.348024  285842 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:32:36.348131  285842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:36.454780  285842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:36.43857477 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:36.454959  285842 docker.go:318] overlay module found
	I1026 08:32:36.457631  285842 out.go:179] * Using the docker driver based on existing profile
	I1026 08:32:36.458802  285842 start.go:305] selected driver: docker
	I1026 08:32:36.458817  285842 start.go:925] validating driver "docker" against &{Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:36.458926  285842 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:32:36.459826  285842 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:32:36.567025  285842 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-10-26 08:32:36.543936452 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:32:36.568652  285842 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:36.569656  285842 cni.go:84] Creating CNI manager for ""
	I1026 08:32:36.569733  285842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:36.569808  285842 start.go:349] cluster config:
	{Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:36.571806  285842 out.go:179] * Starting "default-k8s-diff-port-866212" primary control-plane node in "default-k8s-diff-port-866212" cluster
	I1026 08:32:36.572919  285842 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:32:36.574129  285842 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
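	The pull above is short-circuited when the kic base image already sits in the local daemon, as the cache check later in this log confirms. A minimal way to verify that by hand, with the repository name copied from the log:
	
		docker images --digests gcr.io/k8s-minikube/kicbase-builds
	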
	I1026 08:32:33.235810  278592 out.go:252]   - Booting up control plane ...
	I1026 08:32:33.235965  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:32:33.236088  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:32:33.236940  278592 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:32:33.253962  278592 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:32:33.254175  278592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:32:33.262161  278592 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:32:33.262368  278592 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:32:33.262415  278592 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:32:33.371376  278592 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:32:33.371587  278592 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:32:34.372079  278592 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.000956238s
	I1026 08:32:34.376717  278592 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:32:34.376823  278592 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1026 08:32:34.376965  278592 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:32:34.377045  278592 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
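	kubeadm gates on each component's local health endpoint before continuing. The same probes can be repeated by hand on the node; URLs are copied verbatim from the kubeadm output above, and -k only skips verification of the self-signed serving certificates:
	
		curl -s  http://127.0.0.1:10248/healthz    # kubelet
		curl -sk https://127.0.0.1:10257/healthz   # kube-controller-manager
		curl -sk https://127.0.0.1:10259/livez     # kube-scheduler
		curl -sk https://192.168.103.2:8443/livez  # kube-apiserver (node IP from this run)
	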
	I1026 08:32:36.575306  285842 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:36.575361  285842 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:32:36.575370  285842 cache.go:58] Caching tarball of preloaded images
	I1026 08:32:36.575455  285842 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:32:36.575477  285842 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:32:36.575489  285842 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:32:36.575608  285842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/config.json ...
	I1026 08:32:36.603818  285842 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:32:36.603841  285842 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:32:36.603856  285842 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:32:36.603884  285842 start.go:360] acquireMachinesLock for default-k8s-diff-port-866212: {Name:mk3a220b332ac4d01b8cbea8443619f058df29a0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:32:36.603935  285842 start.go:364] duration metric: took 31.938µs to acquireMachinesLock for "default-k8s-diff-port-866212"
	I1026 08:32:36.603955  285842 start.go:96] Skipping create...Using existing machine configuration
	I1026 08:32:36.603962  285842 fix.go:54] fixHost starting: 
	I1026 08:32:36.604278  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:36.630357  285842 fix.go:112] recreateIfNeeded on default-k8s-diff-port-866212: state=Stopped err=<nil>
	W1026 08:32:36.630391  285842 fix.go:138] unexpected machine state, will restart: <nil>
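	fixHost chooses between reusing and recreating the machine based on the container-state probe logged above; the probe can be re-run directly with the command copied from the log:
	
		docker container inspect default-k8s-diff-port-866212 --format '{{.State.Status}}'
	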
	I1026 08:32:35.609968  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1026 08:32:35.610025  283772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1026 08:32:35.610100  283772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:35.646385  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.649239  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.655428  283772 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:35.655501  283772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:32:35.655613  283772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-366970
	I1026 08:32:35.692359  283772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33106 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/newest-cni-366970/id_rsa Username:docker}
	I1026 08:32:35.801856  283772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:35.820615  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1026 08:32:35.820652  283772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1026 08:32:35.828877  283772 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:35.828944  283772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:35.855091  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:32:35.857328  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:32:35.865028  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1026 08:32:35.865051  283772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1026 08:32:35.871835  283772 api_server.go:72] duration metric: took 318.785719ms to wait for apiserver process to appear ...
	I1026 08:32:35.871860  283772 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:35.871877  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:35.903287  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1026 08:32:35.903311  283772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1026 08:32:35.949219  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1026 08:32:35.949258  283772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1026 08:32:35.983656  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1026 08:32:35.983681  283772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1026 08:32:36.017005  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1026 08:32:36.017033  283772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1026 08:32:36.040091  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1026 08:32:36.040115  283772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1026 08:32:36.059886  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1026 08:32:36.059918  283772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1026 08:32:36.079888  283772 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1026 08:32:36.079912  283772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1026 08:32:36.102184  283772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
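	Once that apply completes (its result is logged further below), the dashboard objects can be inspected by hand. A sketch, assuming the manifests create the usual kubernetes-dashboard namespace; the kubeconfig and binary paths are copied from the log:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.34.1/kubectl get deploy,svc -n kubernetes-dashboard
	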
	I1026 08:32:37.685740  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:32:37.685769  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:32:37.685786  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:37.694061  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1026 08:32:37.694095  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1026 08:32:37.766631  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.911429313s)
	I1026 08:32:37.872813  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:37.883698  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:37.883723  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
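	The verbose /healthz body enumerates every post-start hook; only rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes report [-] here, and both flip to ok in the later responses before the final 200. With cluster credentials, the same verbose view can be fetched through the API; a sketch using the paths from this log:
	
		sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
		  /var/lib/minikube/binaries/v1.34.1/kubectl get --raw='/healthz?verbose'
	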
	I1026 08:32:38.302538  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.445094432s)
	I1026 08:32:38.302672  283772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.200437641s)
	I1026 08:32:38.304280  283772 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-366970 addons enable metrics-server
	
	I1026 08:32:38.305629  283772 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, dashboard
	I1026 08:32:38.307171  283772 addons.go:514] duration metric: took 2.753568518s for enable addons: enabled=[default-storageclass storage-provisioner dashboard]
	I1026 08:32:36.883431  278592 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.506594538s
	I1026 08:32:37.219481  278592 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.842605296s
	I1026 08:32:38.878516  278592 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.50172392s
	I1026 08:32:38.890900  278592 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:32:38.901734  278592 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:32:38.911463  278592 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:32:38.911709  278592 kubeadm.go:318] [mark-control-plane] Marking the node kindnet-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:32:38.919399  278592 kubeadm.go:318] [bootstrap-token] Using token: 3wo4un.gmsrrfm9ihz27mks
	I1026 08:32:38.372839  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:38.377619  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:38.377648  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:32:38.872268  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:38.876719  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1026 08:32:38.876757  283772 api_server.go:103] status: https://192.168.85.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1026 08:32:39.372348  283772 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:32:39.376531  283772 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 08:32:39.377607  283772 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:39.377635  283772 api_server.go:131] duration metric: took 3.505767136s to wait for apiserver health ...
	I1026 08:32:39.377646  283772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:39.381606  283772 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:39.381654  283772 system_pods.go:61] "coredns-66bc5c9577-9xk4x" [4d2bf056-0455-412c-ab4c-5c5680aff306] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:39.381680  283772 system_pods.go:61] "etcd-newest-cni-366970" [5879f65b-4bc9-45bb-b7ea-97a3f98a0854] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:32:39.381696  283772 system_pods.go:61] "kindnet-vzchv" [1a35b08e-08fd-4546-b4c0-79f6e3f3f29b] Running
	I1026 08:32:39.381705  283772 system_pods.go:61] "kube-apiserver-newest-cni-366970" [6a35c9e5-f940-4ed4-844c-6a1314e1a01d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:32:39.381715  283772 system_pods.go:61] "kube-controller-manager-newest-cni-366970" [e32bbffb-6e52-422f-aedf-a15bd47f2e98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:32:39.381722  283772 system_pods.go:61] "kube-proxy-t2z7c" [73aa16de-9d34-4a0f-9c14-8ec0306d69f6] Running
	I1026 08:32:39.381733  283772 system_pods.go:61] "kube-scheduler-newest-cni-366970" [ad5a05b4-584f-4bd2-9f5b-1635269c14d5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:32:39.381744  283772 system_pods.go:61] "storage-provisioner" [1f9d7ffb-20ea-4a1f-a5c0-7b8b0ab3e7b0] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1026 08:32:39.381752  283772 system_pods.go:74] duration metric: took 4.100255ms to wait for pod list to return data ...
	I1026 08:32:39.381766  283772 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:39.384046  283772 default_sa.go:45] found service account: "default"
	I1026 08:32:39.384064  283772 default_sa.go:55] duration metric: took 2.291738ms for default service account to be created ...
	I1026 08:32:39.384074  283772 kubeadm.go:586] duration metric: took 3.831030033s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1026 08:32:39.384087  283772 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:39.386377  283772 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:39.386398  283772 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:39.386410  283772 node_conditions.go:105] duration metric: took 2.319581ms to run NodePressure ...
	I1026 08:32:39.386419  283772 start.go:241] waiting for startup goroutines ...
	I1026 08:32:39.386425  283772 start.go:246] waiting for cluster config update ...
	I1026 08:32:39.386437  283772 start.go:255] writing updated cluster config ...
	I1026 08:32:39.386676  283772 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:39.440766  283772 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:39.442552  283772 out.go:179] * Done! kubectl is now configured to use "newest-cni-366970" cluster and "default" namespace by default
	W1026 08:32:35.937383  273227 node_ready.go:57] node "auto-110992" has "Ready":"False" status (will retry)
	I1026 08:32:37.936631  273227 node_ready.go:49] node "auto-110992" is "Ready"
	I1026 08:32:37.936660  273227 node_ready.go:38] duration metric: took 11.003540663s for node "auto-110992" to be "Ready" ...
	I1026 08:32:37.936679  273227 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:32:37.936724  273227 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:32:37.957233  273227 api_server.go:72] duration metric: took 11.362600646s to wait for apiserver process to appear ...
	I1026 08:32:37.957287  273227 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:32:37.957309  273227 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:32:37.964967  273227 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:32:37.966140  273227 api_server.go:141] control plane version: v1.34.1
	I1026 08:32:37.966167  273227 api_server.go:131] duration metric: took 8.873333ms to wait for apiserver health ...
	I1026 08:32:37.966177  273227 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:32:37.976394  273227 system_pods.go:59] 8 kube-system pods found
	I1026 08:32:37.976476  273227 system_pods.go:61] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:37.976494  273227 system_pods.go:61] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:37.976508  273227 system_pods.go:61] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:37.976513  273227 system_pods.go:61] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:37.976519  273227 system_pods.go:61] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:37.976528  273227 system_pods.go:61] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:37.976533  273227 system_pods.go:61] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:37.976543  273227 system_pods.go:61] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:37.976551  273227 system_pods.go:74] duration metric: took 10.366956ms to wait for pod list to return data ...
	I1026 08:32:37.976573  273227 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:32:37.979360  273227 default_sa.go:45] found service account: "default"
	I1026 08:32:37.979385  273227 default_sa.go:55] duration metric: took 2.802954ms for default service account to be created ...
	I1026 08:32:37.979396  273227 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:32:37.982495  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:37.982532  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:37.982541  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:37.982548  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:37.982554  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:37.982565  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:37.982571  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:37.982576  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:37.982584  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:37.982610  273227 retry.go:31] will retry after 197.407798ms: missing components: kube-dns
	I1026 08:32:38.185817  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:38.185863  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:32:38.185873  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:38.185880  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:38.185885  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:38.185890  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:38.185897  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:38.185902  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:38.185908  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:32:38.185926  273227 retry.go:31] will retry after 350.706535ms: missing components: kube-dns
	I1026 08:32:38.541560  273227 system_pods.go:86] 8 kube-system pods found
	I1026 08:32:38.541596  273227 system_pods.go:89] "coredns-66bc5c9577-bdpf4" [73ee3c9d-3bdc-4511-8d23-6bc2465b3399] Running
	I1026 08:32:38.541604  273227 system_pods.go:89] "etcd-auto-110992" [9fc84efb-0ce7-4457-8265-f98d2842bffd] Running
	I1026 08:32:38.541610  273227 system_pods.go:89] "kindnet-clhsc" [265fe4e3-0c57-43e2-bfa9-afc141339e2a] Running
	I1026 08:32:38.541614  273227 system_pods.go:89] "kube-apiserver-auto-110992" [5ddbcb79-ca5f-49bb-aa41-7be49b985229] Running
	I1026 08:32:38.541618  273227 system_pods.go:89] "kube-controller-manager-auto-110992" [916969f2-17d5-4301-b673-b520bf8e7437] Running
	I1026 08:32:38.541624  273227 system_pods.go:89] "kube-proxy-7rts2" [05ccaf83-b5fc-4e73-99f1-1811a378c24f] Running
	I1026 08:32:38.541629  273227 system_pods.go:89] "kube-scheduler-auto-110992" [22246d33-a982-4a7f-8c81-d8c13fdc7cbe] Running
	I1026 08:32:38.541634  273227 system_pods.go:89] "storage-provisioner" [aceb04e4-cc14-41e9-80c5-2ff100e79f19] Running
	I1026 08:32:38.541644  273227 system_pods.go:126] duration metric: took 562.241317ms to wait for k8s-apps to be running ...
	I1026 08:32:38.541658  273227 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:32:38.541705  273227 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:32:38.557718  273227 system_svc.go:56] duration metric: took 16.052106ms WaitForService to wait for kubelet
	I1026 08:32:38.557752  273227 kubeadm.go:586] duration metric: took 11.96312275s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:32:38.557777  273227 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:32:38.561304  273227 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:32:38.561335  273227 node_conditions.go:123] node cpu capacity is 8
	I1026 08:32:38.561354  273227 node_conditions.go:105] duration metric: took 3.571184ms to run NodePressure ...
	I1026 08:32:38.561370  273227 start.go:241] waiting for startup goroutines ...
	I1026 08:32:38.561380  273227 start.go:246] waiting for cluster config update ...
	I1026 08:32:38.561398  273227 start.go:255] writing updated cluster config ...
	I1026 08:32:38.561703  273227 ssh_runner.go:195] Run: rm -f paused
	I1026 08:32:38.566537  273227 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:38.570905  273227 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bdpf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.576440  273227 pod_ready.go:94] pod "coredns-66bc5c9577-bdpf4" is "Ready"
	I1026 08:32:38.576469  273227 pod_ready.go:86] duration metric: took 5.537642ms for pod "coredns-66bc5c9577-bdpf4" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.579025  273227 pod_ready.go:83] waiting for pod "etcd-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.583532  273227 pod_ready.go:94] pod "etcd-auto-110992" is "Ready"
	I1026 08:32:38.583556  273227 pod_ready.go:86] duration metric: took 4.499354ms for pod "etcd-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.585793  273227 pod_ready.go:83] waiting for pod "kube-apiserver-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.590542  273227 pod_ready.go:94] pod "kube-apiserver-auto-110992" is "Ready"
	I1026 08:32:38.590565  273227 pod_ready.go:86] duration metric: took 4.751436ms for pod "kube-apiserver-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.592769  273227 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:38.972312  273227 pod_ready.go:94] pod "kube-controller-manager-auto-110992" is "Ready"
	I1026 08:32:38.972340  273227 pod_ready.go:86] duration metric: took 379.550394ms for pod "kube-controller-manager-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.171975  273227 pod_ready.go:83] waiting for pod "kube-proxy-7rts2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.571227  273227 pod_ready.go:94] pod "kube-proxy-7rts2" is "Ready"
	I1026 08:32:39.571273  273227 pod_ready.go:86] duration metric: took 399.27282ms for pod "kube-proxy-7rts2" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:39.772459  273227 pod_ready.go:83] waiting for pod "kube-scheduler-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:40.171958  273227 pod_ready.go:94] pod "kube-scheduler-auto-110992" is "Ready"
	I1026 08:32:40.171987  273227 pod_ready.go:86] duration metric: took 399.502181ms for pod "kube-scheduler-auto-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:32:40.172003  273227 pod_ready.go:40] duration metric: took 1.605432017s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:32:40.228208  273227 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:32:40.230124  273227 out.go:179] * Done! kubectl is now configured to use "auto-110992" cluster and "default" namespace by default
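	The extra per-pod readiness wait above can be approximated with kubectl wait; the kube-dns label selector and the 4m0s budget are taken from the log, the rest is a sketch:
	
		kubectl -n kube-system wait pod -l k8s-app=kube-dns \
		  --for=condition=Ready --timeout=4m
	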
	I1026 08:32:38.920866  278592 out.go:252]   - Configuring RBAC rules ...
	I1026 08:32:38.921026  278592 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:32:38.924535  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:32:38.931606  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:32:38.934544  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:32:38.937186  278592 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:32:38.939690  278592 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:32:39.284618  278592 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:32:39.707874  278592 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:32:40.285587  278592 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:32:40.287147  278592 kubeadm.go:318] 
	I1026 08:32:40.287273  278592 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:32:40.287303  278592 kubeadm.go:318] 
	I1026 08:32:40.287397  278592 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:32:40.287406  278592 kubeadm.go:318] 
	I1026 08:32:40.287426  278592 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:32:40.287478  278592 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:32:40.287573  278592 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:32:40.287588  278592 kubeadm.go:318] 
	I1026 08:32:40.287662  278592 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:32:40.287673  278592 kubeadm.go:318] 
	I1026 08:32:40.287749  278592 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:32:40.287765  278592 kubeadm.go:318] 
	I1026 08:32:40.287807  278592 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:32:40.287903  278592 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:32:40.287990  278592 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:32:40.288003  278592 kubeadm.go:318] 
	I1026 08:32:40.288105  278592 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:32:40.288201  278592 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:32:40.288218  278592 kubeadm.go:318] 
	I1026 08:32:40.288356  278592 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token 3wo4un.gmsrrfm9ihz27mks \
	I1026 08:32:40.288500  278592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:32:40.288578  278592 kubeadm.go:318] 	--control-plane 
	I1026 08:32:40.288588  278592 kubeadm.go:318] 
	I1026 08:32:40.288752  278592 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:32:40.288764  278592 kubeadm.go:318] 
	I1026 08:32:40.288866  278592 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token 3wo4un.gmsrrfm9ihz27mks \
	I1026 08:32:40.288993  278592 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:32:40.291430  278592 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:32:40.291581  278592 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
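	If the join command is needed after this output scrolls away, the --discovery-token-ca-cert-hash can be recomputed on the control plane with the standard kubeadm recipe (not minikube-specific):
	
		openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
		  | openssl rsa -pubin -outform der 2>/dev/null \
		  | openssl dgst -sha256 -hex | sed 's/^.* //'
	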
	I1026 08:32:40.291614  278592 cni.go:84] Creating CNI manager for "kindnet"
	I1026 08:32:40.297795  278592 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1026 08:32:36.632016  285842 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-866212" ...
	I1026 08:32:36.632082  285842 cli_runner.go:164] Run: docker start default-k8s-diff-port-866212
	I1026 08:32:37.007492  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:37.039927  285842 kic.go:430] container "default-k8s-diff-port-866212" state is running.
	I1026 08:32:37.040589  285842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-866212
	I1026 08:32:37.077473  285842 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/config.json ...
	I1026 08:32:37.077782  285842 machine.go:93] provisionDockerMachine start ...
	I1026 08:32:37.077878  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:37.104666  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:37.104993  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:37.105019  285842 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:32:37.105825  285842 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1026 08:32:40.264381  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866212
	
	I1026 08:32:40.264410  285842 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-866212"
	I1026 08:32:40.264469  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:40.288221  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:40.289706  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:40.289733  285842 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-866212 && echo "default-k8s-diff-port-866212" | sudo tee /etc/hostname
	I1026 08:32:40.465761  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-866212
	
	I1026 08:32:40.465857  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:40.488600  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:40.488930  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:40.488987  285842 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-866212' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-866212/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-866212' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:32:40.647729  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:32:40.647766  285842 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:32:40.647816  285842 ubuntu.go:190] setting up certificates
	I1026 08:32:40.647828  285842 provision.go:84] configureAuth start
	I1026 08:32:40.647890  285842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-866212
	I1026 08:32:40.678789  285842 provision.go:143] copyHostCerts
	I1026 08:32:40.678846  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:32:40.678856  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:32:40.678917  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:32:40.679045  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:32:40.679051  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:32:40.679080  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:32:40.679142  285842 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:32:40.679146  285842 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:32:40.679169  285842 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:32:40.679227  285842 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-866212 san=[127.0.0.1 192.168.94.2 default-k8s-diff-port-866212 localhost minikube]
	I1026 08:32:41.070782  285842 provision.go:177] copyRemoteCerts
	I1026 08:32:41.070853  285842 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:32:41.070899  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.093330  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:41.203075  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:32:41.222522  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1026 08:32:41.240452  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1026 08:32:41.259597  285842 provision.go:87] duration metric: took 611.75519ms to configureAuth
	I1026 08:32:41.259626  285842 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:32:41.259833  285842 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:41.259956  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.281776  285842 main.go:141] libmachine: Using SSH client type: native
	I1026 08:32:41.282003  285842 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33111 <nil> <nil>}
	I1026 08:32:41.282020  285842 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:32:40.299456  278592 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1026 08:32:40.304229  278592 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:32:40.304294  278592 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1026 08:32:40.319470  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:32:40.620190  278592 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:32:40.620365  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:40.620474  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes kindnet-110992 minikube.k8s.io/updated_at=2025_10_26T08_32_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=kindnet-110992 minikube.k8s.io/primary=true
	I1026 08:32:40.722153  278592 ops.go:34] apiserver oom_adj: -16
	I1026 08:32:40.722314  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:32:41.222386  278592 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
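	The oom_adj read above guards against the apiserver being an early OOM-kill target; the same read can be repeated verbatim on the node (this run logged -16):
	
		cat /proc/$(pgrep kube-apiserver)/oom_adj
	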
	I1026 08:32:41.732869  285842 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:32:41.732900  285842 machine.go:96] duration metric: took 4.655104766s to provisionDockerMachine
	I1026 08:32:41.732923  285842 start.go:293] postStartSetup for "default-k8s-diff-port-866212" (driver="docker")
	I1026 08:32:41.732941  285842 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:32:41.733008  285842 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:32:41.733050  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.760964  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:41.884346  285842 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:32:41.889504  285842 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:32:41.889539  285842 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:32:41.889552  285842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:32:41.889602  285842 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:32:41.889701  285842 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:32:41.889820  285842 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:32:41.901220  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:41.928121  285842 start.go:296] duration metric: took 195.18024ms for postStartSetup
	I1026 08:32:41.928279  285842 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:32:41.928361  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:41.954331  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:42.062932  285842 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:32:42.068127  285842 fix.go:56] duration metric: took 5.46416071s for fixHost
	I1026 08:32:42.068151  285842 start.go:83] releasing machines lock for "default-k8s-diff-port-866212", held for 5.464208063s
	I1026 08:32:42.068201  285842 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-866212
	I1026 08:32:42.089561  285842 ssh_runner.go:195] Run: cat /version.json
	I1026 08:32:42.089603  285842 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:32:42.089621  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:42.089691  285842 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:32:42.113416  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:42.114374  285842 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:32:42.218099  285842 ssh_runner.go:195] Run: systemctl --version
	I1026 08:32:42.302045  285842 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:32:42.349225  285842 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:32:42.355647  285842 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:32:42.355713  285842 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:32:42.366855  285842 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
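The find invocation above parks any stock bridge/podman CNI configs as *.mk_disabled so they cannot conflict with the CNI minikube installs; on this node there were none. A hedged sketch of the inverse operation, handy when restoring a node by hand (the rename-back loop is ours, not a minikube command):

    # Sketch: restore CNI configs that were renamed aside.
    sudo find /etc/cni/net.d -maxdepth 1 -name '*.mk_disabled' \
      -exec sh -c 'mv "$1" "${1%.mk_disabled}"' _ {} \;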
	I1026 08:32:42.366925  285842 start.go:495] detecting cgroup driver to use...
	I1026 08:32:42.366969  285842 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:32:42.367034  285842 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:32:42.388432  285842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:32:42.403911  285842 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:32:42.403965  285842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:32:42.423174  285842 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:32:42.439278  285842 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:32:42.553669  285842 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:32:42.676214  285842 docker.go:234] disabling docker service ...
	I1026 08:32:42.676296  285842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:32:42.695789  285842 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:32:42.713460  285842 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:32:42.831998  285842 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:32:42.929595  285842 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:32:42.945696  285842 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:32:42.963185  285842 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:32:42.963244  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:42.974052  285842 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:32:42.974108  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:42.983823  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:42.993294  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:43.002770  285842 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:32:43.012043  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:43.022183  285842 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:43.031516  285842 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:32:43.041683  285842 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:32:43.052274  285842 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:32:43.062245  285842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:43.148239  285842 ssh_runner.go:195] Run: sudo systemctl restart crio
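The sed pipeline above leaves /etc/crio/crio.conf.d/02-crio.conf with four effective settings before the daemon-reload and restart: the pause image, the systemd cgroup manager, conmon_cgroup = "pod", and the unprivileged-port sysctl. A quick on-node check, as a sketch; the expected values in the comments are inferred from the sed commands in this log, not read from the file:

    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
      /etc/crio/crio.conf.d/02-crio.conf
    # pause_image = "registry.k8s.io/pause:3.10.1"
    # cgroup_manager = "systemd"
    # conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",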
	I1026 08:32:43.535902  285842 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:32:43.535996  285842 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:32:43.540504  285842 start.go:563] Will wait 60s for crictl version
	I1026 08:32:43.540565  285842 ssh_runner.go:195] Run: which crictl
	I1026 08:32:43.544177  285842 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
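crictl resolves its runtime endpoint from the /etc/crictl.yaml written a few steps earlier, which is why the version probe above needs no flags. The equivalent explicit invocation, as a sketch:

    # Same effect as relying on /etc/crictl.yaml, passed per call.
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version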
	I1026 08:32:43.572613  285842 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:32:43.572707  285842 ssh_runner.go:195] Run: crio --version
	I1026 08:32:43.603047  285842 ssh_runner.go:195] Run: crio --version
	I1026 08:32:43.639907  285842 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:32:43.641054  285842 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-866212 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
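The inspect call above packs name, driver, subnet, gateway, MTU and container IPs into one blob via a Go template. A trimmed-down variant that pulls out just the subnet, for interactive use (network name taken from this log):

    docker network inspect default-k8s-diff-port-866212 \
      --format '{{range .IPAM.Config}}{{.Subnet}}{{end}}'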
	I1026 08:32:43.661145  285842 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1026 08:32:43.665781  285842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
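The /etc/hosts edit above uses the standard sudo-safe rewrite pattern: build the new file as the unprivileged user, then install it with sudo cp, because in a plain `sudo cmd > /etc/hosts` the redirect is performed by the non-root shell and fails. The pattern in isolation, as a sketch with the values from this run:

    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.94.1\thost.minikube.internal\n'; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts && rm -f /tmp/h.$$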
	I1026 08:32:43.676827  285842 kubeadm.go:883] updating cluster {Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:32:43.676953  285842 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:32:43.677010  285842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:43.712552  285842 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:43.712572  285842 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:32:43.712635  285842 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:32:43.742419  285842 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:32:43.742443  285842 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:32:43.742453  285842 kubeadm.go:934] updating node { 192.168.94.2 8444 v1.34.1 crio true true} ...
	I1026 08:32:43.742571  285842 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-866212 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
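The unit dump above shows minikube's kubelet override: the empty ExecStart= clears the packaged command, and the second ExecStart re-launches kubelet with the node IP, hostname override and /var/lib/kubelet/config.yaml. To see the merged unit on a live node (plain systemd tooling, nothing minikube-specific):

    # Show the effective kubelet unit, including the 10-kubeadm.conf drop-in.
    systemctl cat kubelet
    systemctl show kubelet -p ExecStart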
	I1026 08:32:43.742657  285842 ssh_runner.go:195] Run: crio config
	I1026 08:32:43.798463  285842 cni.go:84] Creating CNI manager for ""
	I1026 08:32:43.798484  285842 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1026 08:32:43.798500  285842 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:32:43.798520  285842 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-866212 NodeName:default-k8s-diff-port-866212 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:32:43.798637  285842 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-866212"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
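The generated kubeadm config above stitches together four documents: InitConfiguration (bind port 8444, CRI-O socket), ClusterConfiguration (cert SANs, admission plugins, 10.244.0.0/16 pod subnet), KubeletConfiguration (systemd cgroups, disk eviction disabled for CI) and KubeProxyConfiguration (conntrack timeouts zeroed). To sanity-check such a file by hand, recent kubeadm releases ship a validate subcommand; treat its availability in this exact binary as an assumption:

    # Hedged: verify 'kubeadm config validate' exists for v1.34.1 first.
    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new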
	
	I1026 08:32:43.798689  285842 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:32:43.807587  285842 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:32:43.807640  285842 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:32:43.816073  285842 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1026 08:32:43.830236  285842 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:32:43.845020  285842 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1026 08:32:43.858359  285842 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:32:43.862786  285842 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:32:43.874593  285842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:43.978173  285842 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:32:44.005030  285842 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212 for IP: 192.168.94.2
	I1026 08:32:44.005052  285842 certs.go:195] generating shared ca certs ...
	I1026 08:32:44.005070  285842 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:44.005212  285842 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:32:44.005301  285842 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:32:44.005315  285842 certs.go:257] generating profile certs ...
	I1026 08:32:44.005499  285842 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/client.key
	I1026 08:32:44.005573  285842 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/apiserver.key.e19c4109
	I1026 08:32:44.005627  285842 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/proxy-client.key
	I1026 08:32:44.005753  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:32:44.005802  285842 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:32:44.005817  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:32:44.005849  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:32:44.005882  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:32:44.005918  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:32:44.005984  285842 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:32:44.006594  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:32:44.029529  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:32:44.053429  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:32:44.076690  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:32:44.104616  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1026 08:32:44.134739  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1026 08:32:44.158419  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:32:44.180807  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/default-k8s-diff-port-866212/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:32:44.199799  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:32:44.219454  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:32:44.241853  285842 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:32:44.268511  285842 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:32:44.286023  285842 ssh_runner.go:195] Run: openssl version
	I1026 08:32:44.294655  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:32:44.305885  285842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:32:44.310054  285842 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:32:44.310117  285842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:32:44.355126  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:32:44.364721  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:32:44.374015  285842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:32:44.378470  285842 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:32:44.378534  285842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:32:44.425613  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
	I1026 08:32:44.434632  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:32:44.443794  285842 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:44.448098  285842 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:44.448158  285842 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:32:44.491092  285842 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
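The openssl/ln pairs above implement OpenSSL's hashed CA directory: every certificate in /etc/ssl/certs must also be reachable as <subject-hash>.0, and the hash is exactly what `openssl x509 -hash -noout` prints (51391683, 3ec20f2e and b5213941 in this run). The same dance in two lines, as a sketch:

    cert=/usr/share/ca-certificates/minikubeCA.pem
    sudo ln -fs "$cert" "/etc/ssl/certs/$(openssl x509 -hash -noout -in "$cert").0"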
	I1026 08:32:44.501356  285842 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:32:44.505693  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1026 08:32:44.554874  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1026 08:32:44.612146  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1026 08:32:44.670840  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1026 08:32:44.722360  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1026 08:32:44.764462  285842 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
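Each `-checkend 86400` call asks openssl whether the certificate expires within the next 24 hours (non-zero exit if so); this is how minikube decides whether the existing control-plane certs can be reused on restart. The same checks as one loop, as a sketch:

    for c in apiserver-etcd-client apiserver-kubelet-client etcd/server \
             etcd/healthcheck-client etcd/peer front-proxy-client; do
      openssl x509 -noout -in "/var/lib/minikube/certs/$c.crt" -checkend 86400 \
        || echo "$c expires within 24h"
    done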
	I1026 08:32:44.830187  285842 kubeadm.go:400] StartCluster: {Name:default-k8s-diff-port-866212 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-866212 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:32:44.830318  285842 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:32:44.830463  285842 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:32:44.867857  285842 cri.go:89] found id: "bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd"
	I1026 08:32:44.867876  285842 cri.go:89] found id: "fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30"
	I1026 08:32:44.867882  285842 cri.go:89] found id: "2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2"
	I1026 08:32:44.867888  285842 cri.go:89] found id: "f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4"
	I1026 08:32:44.867892  285842 cri.go:89] found id: ""
	I1026 08:32:44.867929  285842 ssh_runner.go:195] Run: sudo runc list -f json
	W1026 08:32:44.881869  285842 kubeadm.go:407] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:32:44Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:32:44.881937  285842 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:32:44.892130  285842 kubeadm.go:416] found existing configuration files, will attempt cluster restart
	I1026 08:32:44.892150  285842 kubeadm.go:597] restartPrimaryControlPlane start ...
	I1026 08:32:44.892187  285842 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1026 08:32:44.901595  285842 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:32:44.902432  285842 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-866212" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:44.902904  285842 kubeconfig.go:62] /home/jenkins/minikube-integration/21772-9429/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-866212" cluster setting kubeconfig missing "default-k8s-diff-port-866212" context setting]
	I1026 08:32:44.903741  285842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:44.905595  285842 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1026 08:32:44.916672  285842 kubeadm.go:634] The running cluster does not require reconfiguration: 192.168.94.2
	I1026 08:32:44.916736  285842 kubeadm.go:601] duration metric: took 24.578645ms to restartPrimaryControlPlane
	I1026 08:32:44.916760  285842 kubeadm.go:402] duration metric: took 86.603974ms to StartCluster
	I1026 08:32:44.916784  285842 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:44.916847  285842 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:32:44.918404  285842 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:32:44.918733  285842 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:32:44.918864  285842 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:32:44.918973  285842 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-866212"
	I1026 08:32:44.919000  285842 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-866212"
	W1026 08:32:44.919018  285842 addons.go:247] addon storage-provisioner should already be in state true
	I1026 08:32:44.919020  285842 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:32:44.919054  285842 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:32:44.919076  285842 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-866212"
	I1026 08:32:44.919095  285842 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-866212"
	W1026 08:32:44.919106  285842 addons.go:247] addon dashboard should already be in state true
	I1026 08:32:44.919129  285842 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:32:44.919570  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:44.919650  285842 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-866212"
	I1026 08:32:44.919685  285842 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-866212"
	I1026 08:32:44.919705  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:44.920007  285842 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:32:44.920950  285842 out.go:179] * Verifying Kubernetes components...
	I1026 08:32:44.922555  285842 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:32:44.947879  285842 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1026 08:32:44.949725  285842 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1026 08:32:44.952867  285842 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> CRI-O <==
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.039386726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.040153472Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=bcabb930-dac9-4cd2-9b7f-3f87337edb05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.043711516Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.044607366Z" level=info msg="Ran pod sandbox 8ddb01627d910be04d2b68b43410388400cee1814fd7792289ecfa0776a7a51e with infra container: kube-system/kube-proxy-t2z7c/POD" id=bcabb930-dac9-4cd2-9b7f-3f87337edb05 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.045189274Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=cec2247a-5e0a-4747-a9ec-91f73c74e3e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.046459168Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d964555f-0dcd-4d07-a823-6a149320f97c name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.046991834Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.048075546Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d0ed0ee3-f33c-4a52-badf-7c353d7e674d name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.049199443Z" level=info msg="Creating container: kube-system/kube-proxy-t2z7c/kube-proxy" id=c4f6e5c7-cf53-49ae-b4e0-cd23676b2470 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.04937873Z" level=info msg="Ran pod sandbox 466d5695298eb6b0c9b05683e1e3b63dc660bb0c9bd8bbb4241caab7304235a2 with infra container: kube-system/kindnet-vzchv/POD" id=cec2247a-5e0a-4747-a9ec-91f73c74e3e0 name=/runtime.v1.RuntimeService/RunPodSandbox
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.049386881Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.050710876Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=0d74784e-aea4-49dc-88a9-134dbb0a8a2e name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.052933886Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=8fc761bb-0cb7-491a-a9ec-db6e63df191a name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.054343883Z" level=info msg="Creating container: kube-system/kindnet-vzchv/kindnet-cni" id=37999dd9-9d88-4a16-8c4f-d27b5ae78131 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.05445268Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.054469751Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.055017831Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.059738386Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.06012808Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.091498969Z" level=info msg="Created container 8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1: kube-system/kindnet-vzchv/kindnet-cni" id=37999dd9-9d88-4a16-8c4f-d27b5ae78131 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.092288929Z" level=info msg="Starting container: 8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1" id=5c594ebe-3167-4a66-a465-f136901bea02 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.094348699Z" level=info msg="Started container" PID=1041 containerID=8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1 description=kube-system/kindnet-vzchv/kindnet-cni id=5c594ebe-3167-4a66-a465-f136901bea02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=466d5695298eb6b0c9b05683e1e3b63dc660bb0c9bd8bbb4241caab7304235a2
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.095813806Z" level=info msg="Created container c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95: kube-system/kube-proxy-t2z7c/kube-proxy" id=c4f6e5c7-cf53-49ae-b4e0-cd23676b2470 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.096632531Z" level=info msg="Starting container: c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95" id=7ef4ef8f-1000-4481-a0bb-99b82ca51cc6 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:32:38 newest-cni-366970 crio[518]: time="2025-10-26T08:32:38.100188148Z" level=info msg="Started container" PID=1040 containerID=c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95 description=kube-system/kube-proxy-t2z7c/kube-proxy id=7ef4ef8f-1000-4481-a0bb-99b82ca51cc6 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8ddb01627d910be04d2b68b43410388400cee1814fd7792289ecfa0776a7a51e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	8941a15ad64a6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 seconds ago       Running             kindnet-cni               1                   466d5695298eb       kindnet-vzchv                               kube-system
	c8e42df1bc950       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   7 seconds ago       Running             kube-proxy                1                   8ddb01627d910       kube-proxy-t2z7c                            kube-system
	8f559cba054d7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   10 seconds ago      Running             kube-apiserver            1                   56583daa78461       kube-apiserver-newest-cni-366970            kube-system
	97e8121d14be8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   10 seconds ago      Running             kube-scheduler            1                   f5fdbefb7bca6       kube-scheduler-newest-cni-366970            kube-system
	88ac6a66e7ed4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   10 seconds ago      Running             kube-controller-manager   1                   3baa5ec02076f       kube-controller-manager-newest-cni-366970   kube-system
	e66c30c535197       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   10 seconds ago      Running             etcd                      1                   e952709622848       etcd-newest-cni-366970                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-366970
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-366970
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=newest-cni-366970
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_32_17_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:32:14 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-366970
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:32:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sun, 26 Oct 2025 08:32:37 +0000   Sun, 26 Oct 2025 08:32:12 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    newest-cni-366970
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                6a456af2-76d6-4f3f-b16f-fdf9a4915e23
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-366970                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-vzchv                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-newest-cni-366970             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-newest-cni-366970    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-t2z7c                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-newest-cni-366970             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 7s                 kube-proxy       
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node newest-cni-366970 event: Registered Node newest-cni-366970 in Controller
	  Normal  Starting                 12s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  12s (x8 over 12s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12s (x8 over 12s)  kubelet          Node newest-cni-366970 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12s (x8 over 12s)  kubelet          Node newest-cni-366970 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           5s                 node-controller  Node newest-cni-366970 event: Registered Node newest-cni-366970 in Controller
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	
	
	==> etcd [e66c30c5351971b039b6f8b1a2490e148427155e2d95d45ad45c4f17d5cf00c4] <==
	{"level":"warn","ts":"2025-10-26T08:32:36.801477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.824734Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.841045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.862759Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45374","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.880624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45396","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.890273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45420","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.898264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.906328Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45434","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.914029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.922342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.932365Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45502","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.940388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.949420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.969441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.979379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.987076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:36.994976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45594","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.003900Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.017450Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.021297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.033203Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45650","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.050492Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.074192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.082791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45708","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:37.158580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45726","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 08:32:46 up  1:15,  0 user,  load average: 8.58, 4.52, 2.57
	Linux newest-cni-366970 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [8941a15ad64a64074d88dc093f84e213784eedc350ada1c4e023e23c2b0032b1] <==
	I1026 08:32:38.298858       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:32:38.299129       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1026 08:32:38.299287       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:32:38.299355       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:32:38.299372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:32:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:32:38.677692       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:32:38.677843       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:32:38.677880       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:32:38.694627       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:32:38.978431       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:32:38.978457       1 metrics.go:72] Registering metrics
	I1026 08:32:38.978507       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [8f559cba054d71194e31a5c83b5d8755f85d3467b2fa95a0880b14a6afa70a96] <==
	I1026 08:32:37.748647       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1026 08:32:37.749133       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1026 08:32:37.750459       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:32:37.750507       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:32:37.750516       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:32:37.750523       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:32:37.750529       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:32:37.751001       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1026 08:32:37.761800       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1026 08:32:37.763104       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:32:37.763172       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:32:37.773482       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:32:37.773512       1 policy_source.go:240] refreshing policies
	I1026 08:32:37.788767       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:32:37.793192       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:32:38.069121       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:32:38.105152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:32:38.130533       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:32:38.138909       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:32:38.200335       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.213.218"}
	I1026 08:32:38.217958       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.110.53.217"}
	I1026 08:32:38.646486       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:32:41.161534       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:32:41.413649       1 controller.go:667] quota admission added evaluator for: endpoints
	I1026 08:32:41.616919       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [88ac6a66e7ed40f23b0bd951138082211c3af281b97a6d0c8e0e4286a236e5cb] <==
	I1026 08:32:41.054588       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:32:41.054599       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:32:41.057382       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1026 08:32:41.057452       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1026 08:32:41.057454       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1026 08:32:41.057722       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:32:41.058656       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1026 08:32:41.058693       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1026 08:32:41.058698       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:32:41.058789       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:32:41.058889       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1026 08:32:41.059144       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1026 08:32:41.060624       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:32:41.061848       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1026 08:32:41.065099       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1026 08:32:41.065119       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1026 08:32:41.065129       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1026 08:32:41.065209       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1026 08:32:41.065210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1026 08:32:41.067405       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:41.069561       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1026 08:32:41.069655       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1026 08:32:41.069755       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-366970"
	I1026 08:32:41.069802       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1026 08:32:41.091882       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [c8e42df1bc950e2d5035d1c8a56b4b29b240783d1d4f82bcad1e5aacff23eb95] <==
	I1026 08:32:38.140112       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:32:38.205044       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:32:38.305802       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:32:38.305841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1026 08:32:38.305928       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:32:38.330084       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:32:38.330161       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:32:38.336350       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:32:38.336753       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:32:38.336791       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:38.338864       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:32:38.338888       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:32:38.338924       1 config.go:200] "Starting service config controller"
	I1026 08:32:38.338935       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:32:38.339106       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:32:38.339132       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:32:38.339713       1 config.go:309] "Starting node config controller"
	I1026 08:32:38.339747       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:32:38.339756       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:32:38.439089       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1026 08:32:38.439098       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:32:38.439783       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [97e8121d14be888ab9f5c7873f3f38cf64b1665f8eea11b6797f4ccc255027f4] <==
	I1026 08:32:35.847410       1 serving.go:386] Generated self-signed cert in-memory
	W1026 08:32:37.662885       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 08:32:37.662922       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 08:32:37.662934       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 08:32:37.662959       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 08:32:37.703490       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:32:37.703523       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:37.707907       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:32:37.707872       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:37.707967       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:32:37.708681       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:37.809885       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.732469     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786590     668 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786699     668 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.786746     668 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.787635     668 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.790952     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-xtables-lock\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.791052     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/73aa16de-9d34-4a0f-9c14-8ec0306d69f6-lib-modules\") pod \"kube-proxy-t2z7c\" (UID: \"73aa16de-9d34-4a0f-9c14-8ec0306d69f6\") " pod="kube-system/kube-proxy-t2z7c"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.796149     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.796586     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.807739     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-366970\" already exists" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.807746     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-366970\" already exists" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.832971     668 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.846593     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-366970\" already exists" pod="kube-system/etcd-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.846636     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.855757     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-366970\" already exists" pod="kube-system/kube-apiserver-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.855796     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.865428     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-366970\" already exists" pod="kube-system/kube-controller-manager-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.865465     668 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: E1026 08:32:37.879380     668 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-366970\" already exists" pod="kube-system/kube-scheduler-newest-cni-366970"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.892359     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-lib-modules\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.893169     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-cni-cfg\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:37 newest-cni-366970 kubelet[668]: I1026 08:32:37.893329     668 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a35b08e-08fd-4546-b4c0-79f6e3f3f29b-xtables-lock\") pod \"kindnet-vzchv\" (UID: \"1a35b08e-08fd-4546-b4c0-79f6e3f3f29b\") " pod="kube-system/kindnet-vzchv"
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:32:40 newest-cni-366970 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
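Note: in the kube-scheduler section of the dump above, the scheduler warns that configmap/extension-apiserver-authentication could not be read and that it is continuing without authentication configuration. This is the standing RBAC gap the warning itself points at; a minimal sketch of the rolebinding it suggests follows (ROLEBINDING_NAME, YOUR_NS and YOUR_SA are placeholders exactly as the log prints them, not values from this run):

    # Sketch only: grant the extension-apiserver-authentication-reader role,
    # as suggested verbatim by the scheduler warning in the log above.
    kubectl create rolebinding -n kube-system ROLEBINDING_NAME \
        --role=extension-apiserver-authentication-reader \
        --serviceaccount=YOUR_NS:YOUR_SA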
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366970 -n newest-cni-366970
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-366970 -n newest-cni-366970: exit status 2 (444.506118ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-366970 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc: exit status 1 (73.6692ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-9xk4x" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-66bv2" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-vhtdc" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-366970 describe pod coredns-66bc5c9577-9xk4x storage-provisioner dashboard-metrics-scraper-6ffb444bf9-66bv2 kubernetes-dashboard-855c9754f9-vhtdc: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (6.94s)
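For manual triage of this failure, the non-running-pod query the harness ran above can be repeated by hand. Note that it is a point-in-time snapshot, which is why the follow-up describe reported the same four pods as NotFound: they were gone again by the time describe ran.

    # Re-run the harness's post-mortem query (command taken verbatim from the
    # helpers_test.go:269 step above); the jsonpath prints only pod names.
    kubectl --context newest-cni-366970 get po -A \
        --field-selector=status.phase!=Running \
        -o=jsonpath='{.items[*].metadata.name}'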

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-866212 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-866212 --alsologtostderr -v=1: exit status 80 (2.398010609s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-866212 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:33:40.732682  302564 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:33:40.732992  302564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:40.733015  302564 out.go:374] Setting ErrFile to fd 2...
	I1026 08:33:40.733023  302564 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:40.733356  302564 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:33:40.733673  302564 out.go:368] Setting JSON to false
	I1026 08:33:40.733729  302564 mustload.go:65] Loading cluster: default-k8s-diff-port-866212
	I1026 08:33:40.734220  302564 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:40.734777  302564 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-866212 --format={{.State.Status}}
	I1026 08:33:40.758153  302564 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:33:40.758508  302564 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:33:40.832847  302564 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:85 SystemTime:2025-10-26 08:33:40.819123266 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:33:40.833755  302564 pause.go:58] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1760609724-21757/minikube-v1.37.0-1760609724-21757-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1760609724-21757-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-866212 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1026 08:33:40.837864  302564 out.go:179] * Pausing node default-k8s-diff-port-866212 ... 
	I1026 08:33:40.840048  302564 host.go:66] Checking if "default-k8s-diff-port-866212" exists ...
	I1026 08:33:40.840434  302564 ssh_runner.go:195] Run: systemctl --version
	I1026 08:33:40.840481  302564 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-866212
	I1026 08:33:40.864375  302564 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33111 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/default-k8s-diff-port-866212/id_rsa Username:docker}
	I1026 08:33:40.978395  302564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:41.006466  302564 pause.go:52] kubelet running: true
	I1026 08:33:41.006536  302564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:33:41.227644  302564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:33:41.227764  302564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:33:41.304425  302564 cri.go:89] found id: "39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297"
	I1026 08:33:41.304449  302564 cri.go:89] found id: "00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345"
	I1026 08:33:41.304454  302564 cri.go:89] found id: "5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	I1026 08:33:41.304457  302564 cri.go:89] found id: "7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028"
	I1026 08:33:41.304460  302564 cri.go:89] found id: "0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1"
	I1026 08:33:41.304464  302564 cri.go:89] found id: "bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd"
	I1026 08:33:41.304467  302564 cri.go:89] found id: "fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30"
	I1026 08:33:41.304469  302564 cri.go:89] found id: "2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2"
	I1026 08:33:41.304472  302564 cri.go:89] found id: "f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4"
	I1026 08:33:41.304485  302564 cri.go:89] found id: "26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	I1026 08:33:41.304488  302564 cri.go:89] found id: "13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b"
	I1026 08:33:41.304490  302564 cri.go:89] found id: ""
	I1026 08:33:41.304540  302564 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:33:41.317928  302564 retry.go:31] will retry after 304.259077ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:33:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:33:41.622387  302564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:41.640082  302564 pause.go:52] kubelet running: false
	I1026 08:33:41.640472  302564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:33:41.788063  302564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:33:41.788138  302564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:33:41.856194  302564 cri.go:89] found id: "39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297"
	I1026 08:33:41.856220  302564 cri.go:89] found id: "00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345"
	I1026 08:33:41.856224  302564 cri.go:89] found id: "5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	I1026 08:33:41.856227  302564 cri.go:89] found id: "7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028"
	I1026 08:33:41.856230  302564 cri.go:89] found id: "0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1"
	I1026 08:33:41.856234  302564 cri.go:89] found id: "bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd"
	I1026 08:33:41.856236  302564 cri.go:89] found id: "fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30"
	I1026 08:33:41.856239  302564 cri.go:89] found id: "2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2"
	I1026 08:33:41.856243  302564 cri.go:89] found id: "f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4"
	I1026 08:33:41.856276  302564 cri.go:89] found id: "26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	I1026 08:33:41.856282  302564 cri.go:89] found id: "13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b"
	I1026 08:33:41.856286  302564 cri.go:89] found id: ""
	I1026 08:33:41.856332  302564 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:33:41.868422  302564 retry.go:31] will retry after 363.820194ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:33:41Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:33:42.233020  302564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:42.246492  302564 pause.go:52] kubelet running: false
	I1026 08:33:42.246558  302564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:33:42.403922  302564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:33:42.403992  302564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:33:42.475162  302564 cri.go:89] found id: "39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297"
	I1026 08:33:42.475183  302564 cri.go:89] found id: "00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345"
	I1026 08:33:42.475188  302564 cri.go:89] found id: "5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	I1026 08:33:42.475191  302564 cri.go:89] found id: "7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028"
	I1026 08:33:42.475194  302564 cri.go:89] found id: "0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1"
	I1026 08:33:42.475199  302564 cri.go:89] found id: "bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd"
	I1026 08:33:42.475203  302564 cri.go:89] found id: "fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30"
	I1026 08:33:42.475207  302564 cri.go:89] found id: "2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2"
	I1026 08:33:42.475211  302564 cri.go:89] found id: "f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4"
	I1026 08:33:42.475239  302564 cri.go:89] found id: "26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	I1026 08:33:42.475259  302564 cri.go:89] found id: "13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b"
	I1026 08:33:42.475264  302564 cri.go:89] found id: ""
	I1026 08:33:42.475309  302564 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:33:42.487131  302564 retry.go:31] will retry after 322.38361ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:33:42Z" level=error msg="open /run/runc: no such file or directory"
	I1026 08:33:42.809677  302564 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:42.823145  302564 pause.go:52] kubelet running: false
	I1026 08:33:42.823202  302564 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1026 08:33:42.968460  302564 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1026 08:33:42.968535  302564 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1026 08:33:43.037136  302564 cri.go:89] found id: "39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297"
	I1026 08:33:43.037155  302564 cri.go:89] found id: "00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345"
	I1026 08:33:43.037159  302564 cri.go:89] found id: "5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	I1026 08:33:43.037162  302564 cri.go:89] found id: "7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028"
	I1026 08:33:43.037165  302564 cri.go:89] found id: "0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1"
	I1026 08:33:43.037169  302564 cri.go:89] found id: "bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd"
	I1026 08:33:43.037176  302564 cri.go:89] found id: "fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30"
	I1026 08:33:43.037180  302564 cri.go:89] found id: "2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2"
	I1026 08:33:43.037182  302564 cri.go:89] found id: "f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4"
	I1026 08:33:43.037187  302564 cri.go:89] found id: "26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	I1026 08:33:43.037190  302564 cri.go:89] found id: "13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b"
	I1026 08:33:43.037192  302564 cri.go:89] found id: ""
	I1026 08:33:43.037226  302564 ssh_runner.go:195] Run: sudo runc list -f json
	I1026 08:33:43.051607  302564 out.go:203] 
	W1026 08:33:43.053073  302564 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T08:33:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1026 08:33:43.053090  302564 out.go:285] * 
	W1026 08:33:43.057325  302564 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1026 08:33:43.058843  302564 out.go:203] 

                                                
                                                
** /stderr **
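The pause failed before any container could be paused: every `sudo runc list -f json` attempt exited 1 with `open /run/runc: no such file or directory`, and the retry.go lines above show three retries at roughly 300ms intervals before minikube gave up. A minimal sketch for checking the state directory on the node follows; the /run/crun path is an assumption, included only in case crio is configured with a non-runc OCI runtime, while everything else is taken from the log above.

    # Sketch only: confirm whether the runc state directory exists inside the
    # node, and cross-check against the CRI view of running containers.
    out/minikube-linux-amd64 ssh -p default-k8s-diff-port-866212 -- \
        "sudo ls -ld /run/runc /run/crun; sudo crictl ps --quiet | head"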
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-866212 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-866212
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-866212:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	        "Created": "2025-10-26T08:31:33.082391712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:32:36.66934556Z",
	            "FinishedAt": "2025-10-26T08:32:35.451370424Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hostname",
	        "HostsPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hosts",
	        "LogPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed-json.log",
	        "Name": "/default-k8s-diff-port-866212",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-866212:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-866212",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	                "LowerDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-866212",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-866212/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-866212",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e9fc23710118247b3b6bbc3cf45f610ac1a8cd88cb60c13cb8ea05131bf603d",
	            "SandboxKey": "/var/run/docker/netns/0e9fc2371011",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-866212": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:23:ed:02:ad:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6895eb84e54294e7e4b0c2ef3aabe968c7a2cc155d3fbec01d47d6ad909fa85",
	                    "EndpointID": "38cbaf4944062491b328c3315019749f54684faeab89ff3b7a0b396025d6d07c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-866212",
	                        "9325d9bcbadd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
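The inspect output confirms the container is running and that 22/tcp is published on 127.0.0.1:33111, matching the SSH client the pause command opened earlier (sshutil.go above). The same value can be pulled out by hand with the exact Go template minikube ran:

    # Extract the host port mapped to the node's SSH port (template taken
    # verbatim from the cli_runner.go step in the pause log above).
    docker container inspect -f \
        '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
        default-k8s-diff-port-866212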
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212: exit status 2 (332.312741ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25: (1.254122866s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-110992 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat docker --no-pager                                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo docker system info                                                                                                                             │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cri-dockerd --version                                                                                                                          │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo containerd config dump                                                                                                                         │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo crio config                                                                                                                                    │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ delete  │ -p auto-110992                                                                                                                                                     │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ start   │ -p custom-flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-110992        │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p kindnet-110992 pgrep -a kubelet                                                                                                                                 │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ image   │ default-k8s-diff-port-866212 image list --format=json                                                                                                              │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ pause   │ -p default-k8s-diff-port-866212 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:33:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:33:10.159627  297886 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:33:10.159870  297886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:10.159875  297886 out.go:374] Setting ErrFile to fd 2...
	I1026 08:33:10.159879  297886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:10.160178  297886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:33:10.160848  297886 out.go:368] Setting JSON to false
	I1026 08:33:10.162396  297886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4541,"bootTime":1761463049,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:33:10.162507  297886 start.go:141] virtualization: kvm guest
	I1026 08:33:10.165271  297886 out.go:179] * [custom-flannel-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:33:10.166596  297886 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:33:10.166615  297886 notify.go:220] Checking for updates...
	I1026 08:33:10.170244  297886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:33:10.171584  297886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:10.173077  297886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:33:10.174451  297886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:33:10.177163  297886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:33:10.180549  297886 config.go:182] Loaded profile config "calico-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180719  297886 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180839  297886 config.go:182] Loaded profile config "kindnet-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180955  297886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:33:10.210885  297886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:33:10.211064  297886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:33:10.287366  297886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:33:10.273883361 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:33:10.287520  297886 docker.go:318] overlay module found
	I1026 08:33:10.290226  297886 out.go:179] * Using the docker driver based on user configuration
	I1026 08:33:10.291314  297886 start.go:305] selected driver: docker
	I1026 08:33:10.291337  297886 start.go:925] validating driver "docker" against <nil>
	I1026 08:33:10.291350  297886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:33:10.292015  297886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:33:10.373449  297886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:33:10.361573467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:33:10.373652  297886 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:33:10.373917  297886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:33:10.375717  297886 out.go:179] * Using Docker driver with root privileges
	I1026 08:33:10.377025  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:10.377061  297886 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1026 08:33:10.377189  297886 start.go:349] cluster config:
	{Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:33:10.378619  297886 out.go:179] * Starting "custom-flannel-110992" primary control-plane node in "custom-flannel-110992" cluster
	I1026 08:33:10.379645  297886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:33:10.381064  297886 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:33:10.382259  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:10.382303  297886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:33:10.382313  297886 cache.go:58] Caching tarball of preloaded images
	I1026 08:33:10.382374  297886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:33:10.382446  297886 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:33:10.382458  297886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:33:10.382563  297886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json ...
	I1026 08:33:10.382609  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json: {Name:mke895c09b3c6d49dc9defb8c0e51e5fd7bf07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:10.405519  297886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:33:10.405543  297886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:33:10.405562  297886 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:33:10.405596  297886 start.go:360] acquireMachinesLock for custom-flannel-110992: {Name:mk74c01f25a96369b104449921ef5549b38c2999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:33:10.405703  297886 start.go:364] duration metric: took 89.651µs to acquireMachinesLock for "custom-flannel-110992"
	I1026 08:33:10.405726  297886 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:10.405802  297886 start.go:125] createHost starting for "" (driver="docker")
	W1026 08:33:06.335336  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:08.834472  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:10.835040  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:06.852730  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:09.351972  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:11.352404  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:11.136223  290986 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.975931s
	I1026 08:33:11.575689  290986 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.415391768s
	I1026 08:33:13.162200  290986 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00197081s
	I1026 08:33:13.176216  290986 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:33:13.190513  290986 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:33:13.202329  290986 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:33:13.202630  290986 kubeadm.go:318] [mark-control-plane] Marking the node calico-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:33:13.211377  290986 kubeadm.go:318] [bootstrap-token] Using token: rlvwx1.6bndmtspzcvif1xf
	I1026 08:33:13.212793  290986 out.go:252]   - Configuring RBAC rules ...
	I1026 08:33:13.212981  290986 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:33:13.217115  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:33:13.223211  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:33:13.227034  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:33:13.229865  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:33:13.232818  290986 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:33:13.616074  290986 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:33:14.638629  290986 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:33:10.408485  297886 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:33:10.408746  297886 start.go:159] libmachine.API.Create for "custom-flannel-110992" (driver="docker")
	I1026 08:33:10.408789  297886 client.go:168] LocalClient.Create starting
	I1026 08:33:10.408858  297886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:33:10.408900  297886 main.go:141] libmachine: Decoding PEM data...
	I1026 08:33:10.408920  297886 main.go:141] libmachine: Parsing certificate...
	I1026 08:33:10.408998  297886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:33:10.409025  297886 main.go:141] libmachine: Decoding PEM data...
	I1026 08:33:10.409038  297886 main.go:141] libmachine: Parsing certificate...
	I1026 08:33:10.409451  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:33:10.430570  297886 cli_runner.go:211] docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:33:10.430672  297886 network_create.go:284] running [docker network inspect custom-flannel-110992] to gather additional debugging logs...
	I1026 08:33:10.430695  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992
	W1026 08:33:10.452218  297886 cli_runner.go:211] docker network inspect custom-flannel-110992 returned with exit code 1
	I1026 08:33:10.452259  297886 network_create.go:287] error running [docker network inspect custom-flannel-110992]: docker network inspect custom-flannel-110992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-110992 not found
	I1026 08:33:10.452285  297886 network_create.go:289] output of [docker network inspect custom-flannel-110992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-110992 not found
	
	** /stderr **
	I1026 08:33:10.452410  297886 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:33:10.474976  297886 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:33:10.475876  297886 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:33:10.476795  297886 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:33:10.477822  297886 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e95bb0}
	I1026 08:33:10.477853  297886 network_create.go:124] attempt to create docker network custom-flannel-110992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 08:33:10.477913  297886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-110992 custom-flannel-110992
	I1026 08:33:10.549346  297886 network_create.go:108] docker network custom-flannel-110992 192.168.76.0/24 created
	I1026 08:33:10.549385  297886 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-110992" container
	I1026 08:33:10.549457  297886 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:33:10.571667  297886 cli_runner.go:164] Run: docker volume create custom-flannel-110992 --label name.minikube.sigs.k8s.io=custom-flannel-110992 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:33:10.595077  297886 oci.go:103] Successfully created a docker volume custom-flannel-110992
	I1026 08:33:10.595160  297886 cli_runner.go:164] Run: docker run --rm --name custom-flannel-110992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-110992 --entrypoint /usr/bin/test -v custom-flannel-110992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:33:11.098483  297886 oci.go:107] Successfully prepared a docker volume custom-flannel-110992
	I1026 08:33:11.098532  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:11.098557  297886 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:33:11.098627  297886 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:33:15.349082  290986 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:33:15.350319  290986 kubeadm.go:318] 
	I1026 08:33:15.350413  290986 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:33:15.350431  290986 kubeadm.go:318] 
	I1026 08:33:15.350548  290986 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:33:15.350570  290986 kubeadm.go:318] 
	I1026 08:33:15.350603  290986 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:33:15.350707  290986 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:33:15.350801  290986 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:33:15.350811  290986 kubeadm.go:318] 
	I1026 08:33:15.350884  290986 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:33:15.350894  290986 kubeadm.go:318] 
	I1026 08:33:15.350968  290986 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:33:15.350987  290986 kubeadm.go:318] 
	I1026 08:33:15.351065  290986 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:33:15.351178  290986 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:33:15.351287  290986 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:33:15.351297  290986 kubeadm.go:318] 
	I1026 08:33:15.351396  290986 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:33:15.351525  290986 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:33:15.351554  290986 kubeadm.go:318] 
	I1026 08:33:15.351689  290986 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rlvwx1.6bndmtspzcvif1xf \
	I1026 08:33:15.351837  290986 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:33:15.351871  290986 kubeadm.go:318] 	--control-plane 
	I1026 08:33:15.351882  290986 kubeadm.go:318] 
	I1026 08:33:15.351989  290986 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:33:15.352001  290986 kubeadm.go:318] 
	I1026 08:33:15.352100  290986 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rlvwx1.6bndmtspzcvif1xf \
	I1026 08:33:15.352234  290986 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:33:15.355117  290986 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:33:15.355316  290986 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:33:15.355335  290986 cni.go:84] Creating CNI manager for "calico"
	I1026 08:33:15.385852  290986 out.go:179] * Configuring Calico (Container Networking Interface) ...
	W1026 08:33:13.334138  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:15.834602  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:13.852546  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:16.364526  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:15.705518  297886 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.606832183s)
	I1026 08:33:15.705554  297886 kic.go:203] duration metric: took 4.606994333s to extract preloaded images to volume ...
	W1026 08:33:15.705657  297886 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:33:15.705690  297886 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:33:15.705735  297886 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:33:15.777086  297886 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-110992 --name custom-flannel-110992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-110992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-110992 --network custom-flannel-110992 --ip 192.168.76.2 --volume custom-flannel-110992:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:33:16.158091  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Running}}
	I1026 08:33:16.181688  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.205977  297886 cli_runner.go:164] Run: docker exec custom-flannel-110992 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:33:16.278772  297886 oci.go:144] the created container "custom-flannel-110992" has a running status.
	I1026 08:33:16.278805  297886 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa...
	I1026 08:33:16.682352  297886 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:33:16.712059  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.731959  297886 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:33:16.731983  297886 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-110992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:33:16.778152  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.796571  297886 machine.go:93] provisionDockerMachine start ...
	I1026 08:33:16.796681  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:16.815443  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:16.815673  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:16.815686  297886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:33:16.960568  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-110992
	
	I1026 08:33:16.960597  297886 ubuntu.go:182] provisioning hostname "custom-flannel-110992"
	I1026 08:33:16.960648  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:16.980837  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:16.981108  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:16.981135  297886 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-110992 && echo "custom-flannel-110992" | sudo tee /etc/hostname
	I1026 08:33:17.136695  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-110992
	
	I1026 08:33:17.136760  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.157882  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:17.158186  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:17.158226  297886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-110992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-110992/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-110992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:33:17.304441  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:33:17.304472  297886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:33:17.304500  297886 ubuntu.go:190] setting up certificates
	I1026 08:33:17.304512  297886 provision.go:84] configureAuth start
	I1026 08:33:17.304573  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:17.322933  297886 provision.go:143] copyHostCerts
	I1026 08:33:17.323011  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:33:17.323025  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:33:17.323110  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:33:17.323235  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:33:17.323259  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:33:17.323314  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:33:17.323414  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:33:17.323425  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:33:17.323479  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:33:17.323551  297886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-110992 san=[127.0.0.1 192.168.76.2 custom-flannel-110992 localhost minikube]
	I1026 08:33:17.659038  297886 provision.go:177] copyRemoteCerts
	I1026 08:33:17.659097  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:33:17.659136  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.680424  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:17.781941  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:33:17.802808  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:33:17.820888  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1026 08:33:17.840435  297886 provision.go:87] duration metric: took 535.903369ms to configureAuth
	I1026 08:33:17.840466  297886 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:33:17.840630  297886 config.go:182] Loaded profile config "custom-flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:17.840739  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.861881  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:17.862092  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:17.862109  297886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:33:18.121645  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:33:18.121673  297886 machine.go:96] duration metric: took 1.325076637s to provisionDockerMachine
	I1026 08:33:18.121686  297886 client.go:171] duration metric: took 7.712888016s to LocalClient.Create
	I1026 08:33:18.121708  297886 start.go:167] duration metric: took 7.712962979s to libmachine.API.Create "custom-flannel-110992"
	I1026 08:33:18.121721  297886 start.go:293] postStartSetup for "custom-flannel-110992" (driver="docker")
	I1026 08:33:18.121732  297886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:33:18.121784  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:33:18.121846  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.143384  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.247728  297886 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:33:18.251576  297886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:33:18.251605  297886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:33:18.251618  297886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:33:18.251673  297886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:33:18.251769  297886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:33:18.251899  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:33:18.259702  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:33:18.283187  297886 start.go:296] duration metric: took 161.451039ms for postStartSetup
	I1026 08:33:18.283663  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:18.304304  297886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json ...
	I1026 08:33:18.304593  297886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:33:18.304634  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.326126  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.428435  297886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:33:18.433231  297886 start.go:128] duration metric: took 8.027417484s to createHost
	I1026 08:33:18.433269  297886 start.go:83] releasing machines lock for "custom-flannel-110992", held for 8.027556073s
	I1026 08:33:18.433350  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:18.452438  297886 ssh_runner.go:195] Run: cat /version.json
	I1026 08:33:18.452482  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.452552  297886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:33:18.452619  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.471303  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.471511  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.569544  297886 ssh_runner.go:195] Run: systemctl --version
	I1026 08:33:18.629261  297886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:33:18.676972  297886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:33:18.683027  297886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:33:18.683092  297886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:33:18.716408  297886 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 08:33:18.716430  297886 start.go:495] detecting cgroup driver to use...
	I1026 08:33:18.716465  297886 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:33:18.716510  297886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:33:18.733161  297886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:33:18.745806  297886 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:33:18.745855  297886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:33:18.763344  297886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:33:18.781323  297886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:33:18.869284  297886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:33:18.969523  297886 docker.go:234] disabling docker service ...
	I1026 08:33:18.969598  297886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:33:18.996066  297886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:33:19.011388  297886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:33:19.125072  297886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:33:19.229067  297886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:33:19.244169  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:33:19.261359  297886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:33:19.261418  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.273589  297886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:33:19.273641  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.284050  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.294148  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.304083  297886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:33:19.312611  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.322147  297886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.337373  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.346603  297886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:33:19.354948  297886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:33:19.362702  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:19.443108  297886 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1026 08:33:19.561078  297886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:33:19.561133  297886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:33:19.565383  297886 start.go:563] Will wait 60s for crictl version
	I1026 08:33:19.565438  297886 ssh_runner.go:195] Run: which crictl
	I1026 08:33:19.569168  297886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:33:19.595993  297886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:33:19.596075  297886 ssh_runner.go:195] Run: crio --version
	I1026 08:33:19.626325  297886 ssh_runner.go:195] Run: crio --version
	I1026 08:33:19.658625  297886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:33:15.390234  290986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:33:15.390278  290986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1026 08:33:15.407649  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:33:16.488841  290986 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.081149635s)
	I1026 08:33:16.488904  290986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:33:16.489267  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:16.489341  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-110992 minikube.k8s.io/updated_at=2025_10_26T08_33_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=calico-110992 minikube.k8s.io/primary=true
	I1026 08:33:16.504854  290986 ops.go:34] apiserver oom_adj: -16
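
The -16 read back from /proc/<pid>/oom_adj above sits near the protected end of the legacy oom_adj scale (-17 to +15, where -17 disables OOM killing entirely), i.e. the apiserver is strongly shielded from the kernel OOM killer. Reproducible on the node with the same one-liner minikube runs:

	sudo cat /proc/$(pgrep kube-apiserver)/oom_adj   # prints -16 for a healthy control plane
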
	I1026 08:33:16.608100  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:17.108155  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:17.609108  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:18.108150  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:18.608980  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.108372  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.609114  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.661128  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
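
The long --format template above folds the network's name, driver, subnet, gateway, MTU and container IPs into one line. For ad-hoc inspection the same data can be pulled with a simpler template (standard docker Go-template syntax):

	docker network inspect custom-flannel-110992 --format '{{json .IPAM.Config}}'
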
	I1026 08:33:19.681864  297886 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 08:33:19.686032  297886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:33:19.696924  297886 kubeadm.go:883] updating cluster {Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:33:19.697060  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:19.697114  297886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:33:19.730351  297886 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:33:19.730377  297886 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:33:19.730429  297886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:33:19.758293  297886 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:33:19.758318  297886 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:33:19.758327  297886 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 08:33:19.758421  297886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-110992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
	I1026 08:33:19.758502  297886 ssh_runner.go:195] Run: crio config
	I1026 08:33:19.808371  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:19.808417  297886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:33:19.808447  297886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-110992 NodeName:custom-flannel-110992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:33:19.808578  297886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-110992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1026 08:33:19.808644  297886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:33:19.819195  297886 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:33:19.819291  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:33:19.835557  297886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1026 08:33:19.852845  297886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:33:19.870227  297886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
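
At this point the rendered config is staged as /var/tmp/minikube/kubeadm.yaml.new (promoted to kubeadm.yaml just before init, as seen further down at 08:33:21.048893). When debugging a start failure, recent kubeadm releases can lint such a file ahead of time; a sketch, using the v1.34.1 binaries already found on the node:

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml
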
	I1026 08:33:19.883934  297886 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:33:19.887638  297886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
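
The pipeline above drops any stale control-plane.minikube.internal entry from /etc/hosts and re-appends the current mapping via a temp file. A quick check that it took effect:

	grep control-plane.minikube.internal /etc/hosts
	# 192.168.76.2	control-plane.minikube.internal
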
	I1026 08:33:19.897601  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:19.991885  297886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:20.025278  297886 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992 for IP: 192.168.76.2
	I1026 08:33:20.025302  297886 certs.go:195] generating shared ca certs ...
	I1026 08:33:20.025323  297886 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.025477  297886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:33:20.025534  297886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:33:20.025546  297886 certs.go:257] generating profile certs ...
	I1026 08:33:20.025599  297886 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key
	I1026 08:33:20.025611  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt with IP's: []
	I1026 08:33:20.078753  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt ...
	I1026 08:33:20.078783  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt: {Name:mkf9cfad17be61bc1319469d32827e7697fee50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.078981  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key ...
	I1026 08:33:20.078998  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key: {Name:mk24a463c89249ff97baea6d0c80b2fbfc1e46b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.079103  297886 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde
	I1026 08:33:20.079129  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 08:33:20.108951  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:20.185950  290986 kubeadm.go:1113] duration metric: took 3.696749644s to wait for elevateKubeSystemPrivileges
	I1026 08:33:20.185987  290986 kubeadm.go:402] duration metric: took 17.693833072s to StartCluster
	I1026 08:33:20.186006  290986 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.186071  290986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:20.187283  290986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.187489  290986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:33:20.187499  290986 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:20.187591  290986 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:33:20.187704  290986 addons.go:69] Setting storage-provisioner=true in profile "calico-110992"
	I1026 08:33:20.187726  290986 addons.go:238] Setting addon storage-provisioner=true in "calico-110992"
	I1026 08:33:20.187760  290986 host.go:66] Checking if "calico-110992" exists ...
	I1026 08:33:20.187778  290986 addons.go:69] Setting default-storageclass=true in profile "calico-110992"
	I1026 08:33:20.187801  290986 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-110992"
	I1026 08:33:20.187682  290986 config.go:182] Loaded profile config "calico-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:20.188577  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.188656  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.191942  290986 out.go:179] * Verifying Kubernetes components...
	I1026 08:33:20.193489  290986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:20.217421  290986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:33:20.218952  290986 addons.go:238] Setting addon default-storageclass=true in "calico-110992"
	I1026 08:33:20.219070  290986 host.go:66] Checking if "calico-110992" exists ...
	I1026 08:33:20.218979  290986 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:20.219144  290986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:33:20.219203  290986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-110992
	I1026 08:33:20.219471  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.253351  290986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/calico-110992/id_rsa Username:docker}
	I1026 08:33:20.256191  290986 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:20.256213  290986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:33:20.256409  290986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-110992
	I1026 08:33:20.282596  290986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/calico-110992/id_rsa Username:docker}
	I1026 08:33:20.301461  290986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
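
The sed script above patches the coredns ConfigMap in flight: it inserts a hosts block ahead of the forward plugin and a log directive ahead of errors, then replaces the ConfigMap. The resulting Corefile fragment should look roughly like this (reconstructed from the sed expressions; the rest of the Corefile is untouched):

	log
	errors
	...
	hosts {
	   192.168.85.1 host.minikube.internal
	   fallthrough
	}
	forward . /etc/resolv.conf
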
	I1026 08:33:20.346748  290986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:20.383770  290986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:20.433235  290986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:20.503779  290986 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 08:33:20.505054  290986 node_ready.go:35] waiting up to 15m0s for node "calico-110992" to be "Ready" ...
	I1026 08:33:20.722723  290986 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1026 08:33:17.834747  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:19.835021  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	I1026 08:33:20.311137  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde ...
	I1026 08:33:20.311168  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde: {Name:mk4031590c5d52a48bbf59d2a7a3c9dc14a78dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.311404  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde ...
	I1026 08:33:20.311427  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde: {Name:mk238b6e77d40947448f51b642a99e2a2db52447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.311546  297886 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt
	I1026 08:33:20.311642  297886 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key
	I1026 08:33:20.311727  297886 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key
	I1026 08:33:20.311749  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt with IP's: []
	I1026 08:33:20.540592  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt ...
	I1026 08:33:20.540679  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt: {Name:mkb6ba530b5c93eca805e4b599bc336817efe544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.540892  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key ...
	I1026 08:33:20.540939  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key: {Name:mkad407034e7e52156d860185c4a4dac9857a417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.541211  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:33:20.541300  297886 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:33:20.541329  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:33:20.541396  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:33:20.541447  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:33:20.541497  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:33:20.541568  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:33:20.542445  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:33:20.565951  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:33:20.585809  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:33:20.604494  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:33:20.623740  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 08:33:20.642477  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:33:20.661443  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:33:20.682669  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:33:20.704364  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:33:20.727187  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:33:20.748749  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:33:20.771002  297886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
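
With the profile certs copied over, the SANs minted at 08:33:20.079129 can be double-checked on the node; a sketch using stock openssl (output should include the four IPs plus the usual kubernetes service DNS names):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
	# expect IP Address:10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2
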
	I1026 08:33:20.784879  297886 ssh_runner.go:195] Run: openssl version
	I1026 08:33:20.792074  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:33:20.802194  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.806530  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.806589  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.846226  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:33:20.856299  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:33:20.865343  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.869325  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.869366  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.906905  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:33:20.917156  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:33:20.927077  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.931045  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.931107  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.976592  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
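
The b5213941.0 / 51391683.0 / 3ec20f2e.0 link names above are OpenSSL subject hashes, which is what the `openssl x509 -hash -noout` calls compute: OpenSSL's default CA lookup resolves /etc/ssl/certs/<subject-hash>.0. Recreating one link by hand, for illustration:

	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # h is b5213941 here
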
	I1026 08:33:20.987042  297886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:33:20.991569  297886 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:33:20.991634  297886 kubeadm.go:400] StartCluster: {Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:33:20.991716  297886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:33:20.991775  297886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:33:21.033486  297886 cri.go:89] found id: ""
	I1026 08:33:21.033561  297886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:33:21.048893  297886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:33:21.059920  297886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:33:21.059989  297886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:33:21.071180  297886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:33:21.071207  297886 kubeadm.go:157] found existing configuration files:
	
	I1026 08:33:21.071278  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:33:21.082833  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:33:21.082898  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:33:21.093537  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:33:21.103413  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:33:21.103481  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:33:21.113157  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:33:21.123721  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:33:21.123779  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:33:21.133865  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:33:21.144812  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:33:21.144880  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:33:21.155342  297886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:33:21.205677  297886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:33:21.205742  297886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:33:21.234895  297886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:33:21.235030  297886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:33:21.235099  297886 kubeadm.go:318] OS: Linux
	I1026 08:33:21.235311  297886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:33:21.235417  297886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:33:21.235501  297886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:33:21.235585  297886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:33:21.235670  297886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:33:21.235754  297886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:33:21.235826  297886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:33:21.235877  297886 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:33:21.315186  297886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:33:21.315368  297886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:33:21.315503  297886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:33:21.328277  297886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1026 08:33:18.852227  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:20.852422  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:20.724009  290986 addons.go:514] duration metric: took 536.413433ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:33:21.009612  290986 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-110992" context rescaled to 1 replicas
	W1026 08:33:22.508334  290986 node_ready.go:57] node "calico-110992" has "Ready":"False" status (will retry)
	I1026 08:33:24.508345  290986 node_ready.go:49] node "calico-110992" is "Ready"
	I1026 08:33:24.508375  290986 node_ready.go:38] duration metric: took 4.003295607s for node "calico-110992" to be "Ready" ...
	I1026 08:33:24.508390  290986 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:33:24.508449  290986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:33:24.524003  290986 api_server.go:72] duration metric: took 4.336466809s to wait for apiserver process to appear ...
	I1026 08:33:24.524033  290986 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:33:24.524056  290986 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:33:24.530501  290986 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
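
The healthz probe is reproducible from the host; -k skips TLS verification because the endpoint serves a cert signed by the cluster's own CA (point --cacert at the profile's ca.crt for a strict check):

	curl -sk https://192.168.85.2:8443/healthz
	# ok
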
	I1026 08:33:24.531804  290986 api_server.go:141] control plane version: v1.34.1
	I1026 08:33:24.531836  290986 api_server.go:131] duration metric: took 7.796123ms to wait for apiserver health ...
	I1026 08:33:24.531847  290986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:33:24.535744  290986 system_pods.go:59] 9 kube-system pods found
	I1026 08:33:24.535783  290986 system_pods.go:61] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.535795  290986 system_pods.go:61] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.535806  290986 system_pods.go:61] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.535812  290986 system_pods.go:61] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.535818  290986 system_pods.go:61] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.535821  290986 system_pods.go:61] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.535825  290986 system_pods.go:61] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.535830  290986 system_pods.go:61] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.535835  290986 system_pods.go:61] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.535842  290986 system_pods.go:74] duration metric: took 3.98769ms to wait for pod list to return data ...
	I1026 08:33:24.535852  290986 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:33:24.538590  290986 default_sa.go:45] found service account: "default"
	I1026 08:33:24.538610  290986 default_sa.go:55] duration metric: took 2.752036ms for default service account to be created ...
	I1026 08:33:24.538620  290986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:33:24.542184  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:24.542231  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.542268  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.542279  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.542291  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.542301  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.542307  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.542314  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.542319  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.542329  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.542366  290986 retry.go:31] will retry after 194.395436ms: missing components: kube-dns
	I1026 08:33:24.742475  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:24.742518  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.742532  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.742542  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.742548  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.742558  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.742565  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.742571  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.742576  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.742584  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.742602  290986 retry.go:31] will retry after 353.307385ms: missing components: kube-dns
	I1026 08:33:21.331168  297886 out.go:252]   - Generating certificates and keys ...
	I1026 08:33:21.331331  297886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:33:21.331445  297886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:33:22.015283  297886 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:33:22.618691  297886 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:33:23.104400  297886 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:33:23.361892  297886 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:33:23.778518  297886 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:33:23.778702  297886 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-110992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 08:33:24.343893  297886 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:33:24.344080  297886 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-110992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 08:33:24.643312  297886 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:33:24.895308  297886 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1026 08:33:22.334949  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:24.835300  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:23.351901  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:25.352221  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:25.176121  297886 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:33:25.176324  297886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:33:25.683609  297886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:33:26.263789  297886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:33:26.577514  297886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:33:27.292539  297886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:33:27.451683  297886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:33:27.452192  297886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:33:27.457326  297886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1026 08:33:26.335292  285842 pod_ready.go:94] pod "coredns-66bc5c9577-h4dk5" is "Ready"
	I1026 08:33:26.335321  285842 pod_ready.go:86] duration metric: took 38.006630881s for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.338390  285842 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.343606  285842 pod_ready.go:94] pod "etcd-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.343629  285842 pod_ready.go:86] duration metric: took 5.218305ms for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.345751  285842 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.350501  285842 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.350527  285842 pod_ready.go:86] duration metric: took 4.751106ms for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.354562  285842 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.533352  285842 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.533378  285842 pod_ready.go:86] duration metric: took 178.790577ms for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.732903  285842 pod_ready.go:83] waiting for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.132984  285842 pod_ready.go:94] pod "kube-proxy-m4gfc" is "Ready"
	I1026 08:33:27.133011  285842 pod_ready.go:86] duration metric: took 400.078265ms for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.332350  285842 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.732939  285842 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:27.732969  285842 pod_ready.go:86] duration metric: took 400.58918ms for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.732991  285842 pod_ready.go:40] duration metric: took 39.407435398s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:27.790045  285842 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:33:27.814280  285842 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-866212" cluster and "default" namespace by default
	I1026 08:33:26.852036  278592 node_ready.go:49] node "kindnet-110992" is "Ready"
	I1026 08:33:26.852069  278592 node_ready.go:38] duration metric: took 41.003339979s for node "kindnet-110992" to be "Ready" ...
	I1026 08:33:26.852086  278592 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:33:26.852160  278592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:33:26.868892  278592 api_server.go:72] duration metric: took 41.533945796s to wait for apiserver process to appear ...
	I1026 08:33:26.868922  278592 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:33:26.868946  278592 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:33:26.874559  278592 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 08:33:26.876180  278592 api_server.go:141] control plane version: v1.34.1
	I1026 08:33:26.876208  278592 api_server.go:131] duration metric: took 7.277307ms to wait for apiserver health ...
	I1026 08:33:26.876218  278592 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:33:26.879868  278592 system_pods.go:59] 8 kube-system pods found
	I1026 08:33:26.879909  278592 system_pods.go:61] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.879917  278592 system_pods.go:61] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:26.879926  278592 system_pods.go:61] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:26.879937  278592 system_pods.go:61] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:26.879943  278592 system_pods.go:61] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:26.879954  278592 system_pods.go:61] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:26.879959  278592 system_pods.go:61] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:26.879967  278592 system_pods.go:61] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:26.879979  278592 system_pods.go:74] duration metric: took 3.754746ms to wait for pod list to return data ...
	I1026 08:33:26.879992  278592 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:33:26.883013  278592 default_sa.go:45] found service account: "default"
	I1026 08:33:26.883039  278592 default_sa.go:55] duration metric: took 3.035973ms for default service account to be created ...
	I1026 08:33:26.883051  278592 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:33:26.885872  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:26.885903  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.885911  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:26.885919  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:26.885924  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:26.885929  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:26.885938  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:26.885942  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:26.885952  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:26.885984  278592 retry.go:31] will retry after 231.523004ms: missing components: kube-dns
	I1026 08:33:27.124430  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.124474  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.124484  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.124495  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.124502  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.124509  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.124516  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.124521  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.124533  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:27.124553  278592 retry.go:31] will retry after 262.637807ms: missing components: kube-dns
	I1026 08:33:27.392833  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.392870  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.392877  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.392886  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.392892  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.392897  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.392904  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.392908  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.392916  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:27.392932  278592 retry.go:31] will retry after 396.466485ms: missing components: kube-dns
	I1026 08:33:27.794753  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.794796  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Running
	I1026 08:33:27.794803  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.794809  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.794814  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.794819  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.794825  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.794831  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.794837  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Running
	I1026 08:33:27.794849  278592 system_pods.go:126] duration metric: took 911.792108ms to wait for k8s-apps to be running ...
	I1026 08:33:27.794863  278592 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:33:27.794912  278592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:27.810080  278592 system_svc.go:56] duration metric: took 15.211284ms WaitForService to wait for kubelet
	I1026 08:33:27.810104  278592 kubeadm.go:586] duration metric: took 42.475165714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:33:27.810121  278592 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:33:27.867034  278592 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:33:27.867065  278592 node_conditions.go:123] node cpu capacity is 8
	I1026 08:33:27.867081  278592 node_conditions.go:105] duration metric: took 56.954334ms to run NodePressure ...
	I1026 08:33:27.867096  278592 start.go:241] waiting for startup goroutines ...
	I1026 08:33:27.867109  278592 start.go:246] waiting for cluster config update ...
	I1026 08:33:27.867123  278592 start.go:255] writing updated cluster config ...
	I1026 08:33:27.867426  278592 ssh_runner.go:195] Run: rm -f paused
	I1026 08:33:27.873059  278592 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:27.877536  278592 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ttrbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.884920  278592 pod_ready.go:94] pod "coredns-66bc5c9577-ttrbv" is "Ready"
	I1026 08:33:27.884953  278592 pod_ready.go:86] duration metric: took 7.39026ms for pod "coredns-66bc5c9577-ttrbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.887303  278592 pod_ready.go:83] waiting for pod "etcd-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.891874  278592 pod_ready.go:94] pod "etcd-kindnet-110992" is "Ready"
	I1026 08:33:27.891895  278592 pod_ready.go:86] duration metric: took 4.572409ms for pod "etcd-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.894092  278592 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.898352  278592 pod_ready.go:94] pod "kube-apiserver-kindnet-110992" is "Ready"
	I1026 08:33:27.898384  278592 pod_ready.go:86] duration metric: took 4.257267ms for pod "kube-apiserver-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.900375  278592 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.389650  278592 pod_ready.go:94] pod "kube-controller-manager-kindnet-110992" is "Ready"
	I1026 08:33:28.389675  278592 pod_ready.go:86] duration metric: took 489.281702ms for pod "kube-controller-manager-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.478506  278592 pod_ready.go:83] waiting for pod "kube-proxy-kfcp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.877973  278592 pod_ready.go:94] pod "kube-proxy-kfcp7" is "Ready"
	I1026 08:33:28.878007  278592 pod_ready.go:86] duration metric: took 399.472679ms for pod "kube-proxy-kfcp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.078527  278592 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.478790  278592 pod_ready.go:94] pod "kube-scheduler-kindnet-110992" is "Ready"
	I1026 08:33:29.478820  278592 pod_ready.go:86] duration metric: took 400.264656ms for pod "kube-scheduler-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.478834  278592 pod_ready.go:40] duration metric: took 1.60571722s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:29.540118  278592 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:33:29.542480  278592 out.go:179] * Done! kubectl is now configured to use "kindnet-110992" cluster and "default" namespace by default
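Note: the pod_ready polling above waits for each pod carrying one of the listed control-plane labels to report a Ready condition. A roughly equivalent manual check, sketched on the assumption that kubectl still points at the kindnet-110992 context minikube just wrote, would be:

	kubectl --context kindnet-110992 -n kube-system wait pod \
	  -l k8s-app=kube-dns --for=condition=Ready --timeout=4m0s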
	I1026 08:33:25.099957  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:25.099988  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:25.099997  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:25.100003  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:25.100008  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:25.100015  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:25.100019  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:25.100024  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:25.100035  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:25.100040  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:25.100062  290986 retry.go:31] will retry after 429.717297ms: missing components: kube-dns
	I1026 08:33:25.535216  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:25.535287  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:25.535301  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:25.535314  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:25.535332  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:25.535343  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:25.535352  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:25.535358  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:25.535367  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:25.535373  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:25.535400  290986 retry.go:31] will retry after 547.962369ms: missing components: kube-dns
	I1026 08:33:26.089280  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:26.089328  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:26.089342  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:26.089353  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.089367  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:26.089374  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:26.089380  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:26.089386  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:26.089392  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:26.089397  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:26.089416  290986 retry.go:31] will retry after 584.273436ms: missing components: kube-dns
	I1026 08:33:26.679010  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:26.679050  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:26.679068  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:26.679077  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.679084  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:26.679092  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:26.679110  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:26.679116  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:26.679121  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:26.679126  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:26.679146  290986 retry.go:31] will retry after 910.300944ms: missing components: kube-dns
	I1026 08:33:27.593986  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:27.594022  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:27.594034  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:27.594055  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.594076  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:27.594085  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:27.594090  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:27.594095  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:27.594101  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:27.594107  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:27.594130  290986 retry.go:31] will retry after 1.010302158s: missing components: kube-dns
	I1026 08:33:28.609186  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:28.609222  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:28.609234  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:28.609244  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:28.609281  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:28.609289  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:28.609297  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:28.609303  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:28.609311  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:28.609316  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:28.609340  290986 retry.go:31] will retry after 1.223875932s: missing components: kube-dns
	I1026 08:33:29.837467  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:29.837502  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:29.837515  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:29.837525  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:29.837531  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:29.837537  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:29.837544  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:29.837550  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:29.837557  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:29.837565  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:29.837583  290986 retry.go:31] will retry after 1.661573564s: missing components: kube-dns
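Note: kube-dns keeps appearing as the missing component because coredns cannot become Ready until calico-node finishes initializing and starts providing pod networking. One way to inspect the same condition interactively, assuming the calico-110992 context exists for this profile, is:

	kubectl --context calico-110992 -n kube-system describe pod -l k8s-app=kube-dns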
	I1026 08:33:27.459108  297886 out.go:252]   - Booting up control plane ...
	I1026 08:33:27.459280  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:33:27.460652  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:33:27.461830  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:33:27.481200  297886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:33:27.481380  297886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:33:27.490068  297886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:33:27.490380  297886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:33:27.490423  297886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:33:27.630525  297886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:33:27.630675  297886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:33:28.634366  297886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001970821s
	I1026 08:33:28.636662  297886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:33:28.636814  297886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 08:33:28.636934  297886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:33:28.637038  297886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:33:30.141472  297886 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.504859912s
	I1026 08:33:30.964619  297886 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.328035963s
	I1026 08:33:32.638296  297886 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001656106s
	I1026 08:33:32.649453  297886 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:33:32.662328  297886 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:33:32.672126  297886 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:33:32.672402  297886 kubeadm.go:318] [mark-control-plane] Marking the node custom-flannel-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:33:32.681301  297886 kubeadm.go:318] [bootstrap-token] Using token: d0sojn.85e0b0ka2cwunoiy
	I1026 08:33:32.682721  297886 out.go:252]   - Configuring RBAC rules ...
	I1026 08:33:32.682867  297886 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:33:32.686185  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:33:32.691584  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:33:32.694062  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:33:32.697506  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:33:32.699806  297886 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:33:33.043809  297886 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:33:33.461228  297886 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:33:34.044020  297886 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:33:34.044868  297886 kubeadm.go:318] 
	I1026 08:33:34.044981  297886 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:33:34.045010  297886 kubeadm.go:318] 
	I1026 08:33:34.045114  297886 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:33:34.045123  297886 kubeadm.go:318] 
	I1026 08:33:34.045173  297886 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:33:34.045297  297886 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:33:34.045362  297886 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:33:34.045389  297886 kubeadm.go:318] 
	I1026 08:33:34.045482  297886 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:33:34.045501  297886 kubeadm.go:318] 
	I1026 08:33:34.045570  297886 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:33:34.045578  297886 kubeadm.go:318] 
	I1026 08:33:34.045639  297886 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:33:34.045734  297886 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:33:34.045821  297886 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:33:34.045833  297886 kubeadm.go:318] 
	I1026 08:33:34.045931  297886 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:33:34.046023  297886 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:33:34.046032  297886 kubeadm.go:318] 
	I1026 08:33:34.046147  297886 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d0sojn.85e0b0ka2cwunoiy \
	I1026 08:33:34.046341  297886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:33:34.046381  297886 kubeadm.go:318] 	--control-plane 
	I1026 08:33:34.046392  297886 kubeadm.go:318] 
	I1026 08:33:34.046496  297886 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:33:34.046504  297886 kubeadm.go:318] 
	I1026 08:33:34.046596  297886 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d0sojn.85e0b0ka2cwunoiy \
	I1026 08:33:34.046736  297886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:33:34.049671  297886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:33:34.049818  297886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:33:34.049853  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:34.052716  297886 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
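Note: the join commands printed above embed a --discovery-token-ca-cert-hash. If that hash ever needs to be recomputed, the recipe documented for kubeadm join is to hash the cluster CA public key on the control-plane node:

	openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'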
	I1026 08:33:31.503752  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:31.503786  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:31.503798  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:31.503821  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:31.503833  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:31.503842  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:31.503847  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:31.503853  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:31.503859  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:31.503868  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:31.503887  290986 retry.go:31] will retry after 1.994361195s: missing components: kube-dns
	I1026 08:33:33.503528  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:33.503557  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:33.503565  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:33.503573  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:33.503577  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:33.503581  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:33.503585  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:33.503588  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:33.503592  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:33.503596  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:33.503609  290986 retry.go:31] will retry after 2.083828012s: missing components: kube-dns
	I1026 08:33:34.054031  297886 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:33:34.054077  297886 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1026 08:33:34.058176  297886 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1026 08:33:34.058201  297886 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1026 08:33:34.077102  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:33:34.392471  297886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:33:34.392576  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:34.392607  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-110992 minikube.k8s.io/updated_at=2025_10_26T08_33_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=custom-flannel-110992 minikube.k8s.io/primary=true
	I1026 08:33:34.402356  297886 ops.go:34] apiserver oom_adj: -16
	I1026 08:33:34.466388  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:34.967214  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:35.467464  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:35.967119  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:36.467466  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:36.967470  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:37.466533  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:37.967429  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:38.467177  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:38.967276  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:39.035204  297886 kubeadm.go:1113] duration metric: took 4.642698439s to wait for elevateKubeSystemPrivileges
	I1026 08:33:39.035242  297886 kubeadm.go:402] duration metric: took 18.043610807s to StartCluster
	I1026 08:33:39.035290  297886 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:39.035375  297886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:39.037310  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:39.037544  297886 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:39.037559  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:33:39.037594  297886 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:33:39.037691  297886 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-110992"
	I1026 08:33:39.037699  297886 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-110992"
	I1026 08:33:39.037714  297886 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-110992"
	I1026 08:33:39.037724  297886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-110992"
	I1026 08:33:39.037748  297886 host.go:66] Checking if "custom-flannel-110992" exists ...
	I1026 08:33:39.037788  297886 config.go:182] Loaded profile config "custom-flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:39.038158  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.038380  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.039224  297886 out.go:179] * Verifying Kubernetes components...
	I1026 08:33:39.040672  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:39.063422  297886 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:33:39.065318  297886 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-110992"
	I1026 08:33:39.065365  297886 host.go:66] Checking if "custom-flannel-110992" exists ...
	I1026 08:33:39.065799  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.066799  297886 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:39.066828  297886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:33:39.066881  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:39.094849  297886 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:39.094877  297886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:33:39.094947  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:39.109131  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:39.131325  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:39.198800  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:33:39.236296  297886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:39.266297  297886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:39.275954  297886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:39.372872  297886 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
	I1026 08:33:39.374220  297886 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-110992" to be "Ready" ...
	I1026 08:33:39.579998  297886 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
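Note: the sed pipeline run at 08:33:39 above injects a host record into the coredns ConfigMap; reconstructed from that command, the block added to the Corefile is:

	hosts {
	   192.168.76.1 host.minikube.internal
	   fallthrough
	}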
	I1026 08:33:35.592120  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:35.592155  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:35.592167  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:35.592178  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:35.592184  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:35.592190  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:35.592195  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:35.592203  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:35.592206  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:35.592209  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:35.592223  290986 retry.go:31] will retry after 2.491065819s: missing components: kube-dns
	I1026 08:33:38.090037  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:38.090080  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:38.090093  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:38.090105  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:38.090112  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:38.090119  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:38.090126  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:38.090140  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:38.090145  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:38.090151  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:38.090206  290986 retry.go:31] will retry after 4.485660087s: missing components: kube-dns
	I1026 08:33:39.581430  297886 addons.go:514] duration metric: took 543.833117ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:33:39.877320  297886 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-110992" context rescaled to 1 replicas
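Note: the kapi.go step trims the coredns deployment to a single replica for this one-node cluster; done by hand it would be roughly:

	kubectl --context custom-flannel-110992 -n kube-system scale deployment coredns --replicas=1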
	
	
	==> CRI-O <==
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.168706517Z" level=info msg="Started container" PID=1742 containerID=0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper id=53337e31-d48f-4558-8e35-aea254d4e217 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8aac7ddd73687d9d220453227c3218a3b00f6b090b87c5750a57a24eb2c7e75
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.248146176Z" level=info msg="Removing container: 2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605" id=904a2bd2-d1a9-4d43-afec-a9139b3ebc3a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.258141333Z" level=info msg="Removed container 2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=904a2bd2-d1a9-4d43-afec-a9139b3ebc3a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.280387452Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d41d784b-ffac-49c7-84bc-10db4451ca41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.281414676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=872bd60f-c5df-4597-894b-f5e3d32800e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.282527614Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=65e42fc1-9165-4b20-ace7-552d38e7babf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.282661626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287203582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.28740396Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/844a1d364e62b964561f9b90a385f91bfa2b5f7ad3658eb2ed6cbdca5369801c/merged/etc/passwd: no such file or directory"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287439753Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/844a1d364e62b964561f9b90a385f91bfa2b5f7ad3658eb2ed6cbdca5369801c/merged/etc/group: no such file or directory"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287689057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.319521908Z" level=info msg="Created container 39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297: kube-system/storage-provisioner/storage-provisioner" id=65e42fc1-9165-4b20-ace7-552d38e7babf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.320221894Z" level=info msg="Starting container: 39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297" id=2fbde231-157b-4d97-aa09-598cb6487a7e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.322345082Z" level=info msg="Started container" PID=1756 containerID=39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297 description=kube-system/storage-provisioner/storage-provisioner id=2fbde231-157b-4d97-aa09-598cb6487a7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e0500a55d16f3240e1530d778381b1ce7b563d4e2b3577e4026b4140ae15509
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.12453094Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d66c0621-7e22-4471-87c6-5aec11bbfcfb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.126918145Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ce8bc235-28aa-4cdf-8105-cf3c65a2d865 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.128116424Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=3f77d080-4036-4838-8e96-16f3322718a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.128644206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.137617117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.138351704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.185826204Z" level=info msg="Created container 26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=3f77d080-4036-4838-8e96-16f3322718a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.186683291Z" level=info msg="Starting container: 26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8" id=6ddbd748-da5b-42a9-af5c-da3d385a3073 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.18904219Z" level=info msg="Started container" PID=1792 containerID=26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper id=6ddbd748-da5b-42a9-af5c-da3d385a3073 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8aac7ddd73687d9d220453227c3218a3b00f6b090b87c5750a57a24eb2c7e75
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.321101515Z" level=info msg="Removing container: 0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438" id=0450c7a3-605d-44b5-b949-3f1bbb311940 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.331441872Z" level=info msg="Removed container 0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=0450c7a3-605d-44b5-b949-3f1bbb311940 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	26835d1b859d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           12 seconds ago      Exited              dashboard-metrics-scraper   3                   f8aac7ddd7368       dashboard-metrics-scraper-6ffb444bf9-qshwh             kubernetes-dashboard
	39400809d8a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   3e0500a55d16f       storage-provisioner                                    kube-system
	13ab62cadeb68       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   44cbfdd9ae912       kubernetes-dashboard-855c9754f9-wb2rv                  kubernetes-dashboard
	00f90ace4d071       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   cea90ad51cacd       coredns-66bc5c9577-h4dk5                               kube-system
	d8ad19ac7d6a1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   c32cc8ba5eb58       busybox                                                default
	5e9a95956c5c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   3e0500a55d16f       storage-provisioner                                    kube-system
	7e8addd91064c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   bb404d54ed538       kube-proxy-m4gfc                                       kube-system
	0c1cd2bcf70ca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   11f4e915b425c       kindnet-vr7fg                                          kube-system
	bac6e251286c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   a09d1a459eb18       kube-apiserver-default-k8s-diff-port-866212            kube-system
	fea0de012ed14       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   ea56d5cd626c2       kube-controller-manager-default-k8s-diff-port-866212   kube-system
	2c7535c22bfef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   1956428ee36f9       kube-scheduler-default-k8s-diff-port-866212            kube-system
	f179309133864       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   8898076bacecd       etcd-default-k8s-diff-port-866212                      kube-system
	
	
	==> coredns [00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54114 - 16400 "HINFO IN 5970486570999536228.5468883746339438641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04690717s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
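
The repeated "dial tcp 10.96.0.1:443: i/o timeout" failures above all point at the in-cluster apiserver Service VIP being unreachable while the node came back up. A minimal sketch of the same reachability test, assuming the default service CIDR (10.96.0.1 is the conventional first address; adjust for other clusters):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Dial the kubernetes Service VIP the way the failing listers do.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // matches the i/o timeout above
			return
		}
		defer conn.Close()
		fmt.Println("apiserver VIP reachable")
	}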
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-866212
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-866212
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-866212
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-866212
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:33:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:32:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-866212
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                35b0b8af-89ca-40c6-acd5-1ad4f6cfade6
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-h4dk5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-866212                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-vr7fg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-866212             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-866212    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-m4gfc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-866212             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qshwh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wb2rv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-866212 event: Registered Node default-k8s-diff-port-866212 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-866212 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node default-k8s-diff-port-866212 event: Registered Node default-k8s-diff-port-866212 in Controller
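
A quick cross-check of the "Allocated resources" percentages above, assuming kubectl truncates rather than rounds: 850m CPU of 8 allocatable cores and 220Mi memory of 32863360Ki allocatable:

	package main

	import "fmt"

	func main() {
		// 850m requested of 8 cores (8000m) -> 10.6%, displayed as 10%
		fmt.Printf("cpu: %.1f%%\n", 850.0/8000.0*100)
		// 220Mi = 225280Ki requested of 32863360Ki -> 0.69%, displayed as 0%
		fmt.Printf("memory: %.2f%%\n", 225280.0/32863360.0*100)
	}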
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
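
The "martian source" entries above are the kernel flagging packets whose source address (127.0.0.1) cannot legitimately arrive on eth0; they only show up when the log_martians sysctl is enabled. A small sketch for checking that setting via procfs:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// 1 means the kernel logs martian packets, as seen in the dmesg output.
		b, err := os.ReadFile("/proc/sys/net/ipv4/conf/all/log_martians")
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		fmt.Println("log_martians =", strings.TrimSpace(string(b)))
	}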
	
	
	==> etcd [f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4] <==
	{"level":"warn","ts":"2025-10-26T08:32:46.056977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:46.150342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38470","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:32:53.834986Z","caller":"traceutil/trace.go:172","msg":"trace[1407613908] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"100.581594ms","start":"2025-10-26T08:32:53.734383Z","end":"2025-10-26T08:32:53.834965Z","steps":["trace[1407613908] 'process raft request'  (duration: 95.823431ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:54.395533Z","caller":"traceutil/trace.go:172","msg":"trace[384222494] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"133.747202ms","start":"2025-10-26T08:32:54.261757Z","end":"2025-10-26T08:32:54.395505Z","steps":["trace[384222494] 'process raft request'  (duration: 119.559134ms)","trace[384222494] 'compare'  (duration: 13.951784ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.673452Z","caller":"traceutil/trace.go:172","msg":"trace[2145136223] transaction","detail":"{read_only:false; response_revision:528; number_of_response:1; }","duration":"148.083117ms","start":"2025-10-26T08:32:54.525344Z","end":"2025-10-26T08:32:54.673427Z","steps":["trace[2145136223] 'process raft request'  (duration: 124.891463ms)","trace[2145136223] 'compare'  (duration: 23.080146ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.807648Z","caller":"traceutil/trace.go:172","msg":"trace[337547233] transaction","detail":"{read_only:false; response_revision:529; number_of_response:1; }","duration":"129.274979ms","start":"2025-10-26T08:32:54.678353Z","end":"2025-10-26T08:32:54.807628Z","steps":["trace[337547233] 'process raft request'  (duration: 122.996075ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:54.953909Z","caller":"traceutil/trace.go:172","msg":"trace[1335758568] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:555; }","duration":"123.357278ms","start":"2025-10-26T08:32:54.830529Z","end":"2025-10-26T08:32:54.953887Z","steps":["trace[1335758568] 'read index received'  (duration: 123.347261ms)","trace[1335758568] 'applied index is now lower than readState.Index'  (duration: 8.941µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:54.976562Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.008597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-h4dk5\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-26T08:32:54.976611Z","caller":"traceutil/trace.go:172","msg":"trace[1398264768] transaction","detail":"{read_only:false; response_revision:530; number_of_response:1; }","duration":"163.989049ms","start":"2025-10-26T08:32:54.812601Z","end":"2025-10-26T08:32:54.976590Z","steps":["trace[1398264768] 'process raft request'  (duration: 141.333406ms)","trace[1398264768] 'compare'  (duration: 22.521232ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.976652Z","caller":"traceutil/trace.go:172","msg":"trace[1615356878] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-h4dk5; range_end:; response_count:1; response_revision:529; }","duration":"146.116947ms","start":"2025-10-26T08:32:54.830516Z","end":"2025-10-26T08:32:54.976633Z","steps":["trace[1615356878] 'agreement among raft nodes before linearized reading'  (duration: 123.450054ms)","trace[1615356878] 'range keys from in-memory index tree'  (duration: 22.448588ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:55.116313Z","caller":"traceutil/trace.go:172","msg":"trace[1501113392] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"134.586941ms","start":"2025-10-26T08:32:54.981701Z","end":"2025-10-26T08:32:55.116288Z","steps":["trace[1501113392] 'process raft request'  (duration: 126.304001ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T08:32:55.373244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.89849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" limit:1 ","response":"range_response_count:1 size:4622"}
	{"level":"info","ts":"2025-10-26T08:32:55.373342Z","caller":"traceutil/trace.go:172","msg":"trace[2117944040] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh; range_end:; response_count:1; response_revision:531; }","duration":"165.001088ms","start":"2025-10-26T08:32:55.208319Z","end":"2025-10-26T08:32:55.373320Z","steps":["trace[2117944040] 'agreement among raft nodes before linearized reading'  (duration: 31.643016ms)","trace[2117944040] 'range keys from in-memory index tree'  (duration: 133.153088ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.373839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.30513ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537279 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" mod_revision:529 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" value_size:690 lease:6571765741983537150 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:55.374032Z","caller":"traceutil/trace.go:172","msg":"trace[1033819592] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"252.903905ms","start":"2025-10-26T08:32:55.121110Z","end":"2025-10-26T08:32:55.374014Z","steps":["trace[1033819592] 'process raft request'  (duration: 118.867623ms)","trace[1033819592] 'compare'  (duration: 133.184913ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.704337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.874059ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" mod_revision:521 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" value_size:4630 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:55.704586Z","caller":"traceutil/trace.go:172","msg":"trace[209732539] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"227.459875ms","start":"2025-10-26T08:32:55.477110Z","end":"2025-10-26T08:32:55.704570Z","steps":["trace[209732539] 'process raft request'  (duration: 227.390868ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:55.704612Z","caller":"traceutil/trace.go:172","msg":"trace[956732398] transaction","detail":"{read_only:false; response_revision:534; number_of_response:1; }","duration":"322.978002ms","start":"2025-10-26T08:32:55.381614Z","end":"2025-10-26T08:32:55.704592Z","steps":["trace[956732398] 'process raft request'  (duration: 196.759946ms)","trace[956732398] 'compare'  (duration: 125.764303ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.704736Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T08:32:55.381594Z","time spent":"323.071424ms","remote":"127.0.0.1:37710","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4716,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" mod_revision:521 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" value_size:4630 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" > >"}
	{"level":"warn","ts":"2025-10-26T08:32:56.042413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.813292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-h4dk5\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-26T08:32:56.042483Z","caller":"traceutil/trace.go:172","msg":"trace[1984375349] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-h4dk5; range_end:; response_count:1; response_revision:538; }","duration":"211.893407ms","start":"2025-10-26T08:32:55.830574Z","end":"2025-10-26T08:32:56.042467Z","steps":["trace[1984375349] 'agreement among raft nodes before linearized reading'  (duration: 79.739368ms)","trace[1984375349] 'range keys from in-memory index tree'  (duration: 131.940805ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:56.042511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.059214ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537290 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/etcd-default-k8s-diff-port-866212.1871fd6ada38ed9e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-default-k8s-diff-port-866212.1871fd6ada38ed9e\" value_size:680 lease:6571765741983537150 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:56.042577Z","caller":"traceutil/trace.go:172","msg":"trace[15110038] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"260.00415ms","start":"2025-10-26T08:32:55.782561Z","end":"2025-10-26T08:32:56.042565Z","steps":["trace[15110038] 'process raft request'  (duration: 127.804942ms)","trace[15110038] 'compare'  (duration: 131.811814ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:33:15.555005Z","caller":"traceutil/trace.go:172","msg":"trace[725905225] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"106.565383ms","start":"2025-10-26T08:33:15.448419Z","end":"2025-10-26T08:33:15.554984Z","steps":["trace[725905225] 'process raft request'  (duration: 106.410711ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:33:15.681002Z","caller":"traceutil/trace.go:172","msg":"trace[166051861] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"229.324722ms","start":"2025-10-26T08:33:15.451654Z","end":"2025-10-26T08:33:15.680979Z","steps":["trace[166051861] 'process raft request'  (duration: 146.532613ms)","trace[166051861] 'compare'  (duration: 82.477977ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:33:44 up  1:16,  0 user,  load average: 6.89, 4.75, 2.77
	Linux default-k8s-diff-port-866212 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1] <==
	I1026 08:32:47.752833       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:32:47.753204       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:32:47.753416       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:32:47.753439       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:32:47.753465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:32:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:32:47.955688       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:32:47.955727       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:32:47.955741       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:32:47.956465       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:32:48.278759       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:32:48.278796       1 metrics.go:72] Registering metrics
	I1026 08:32:48.278890       1 controller.go:711] "Syncing nftables rules"
	I1026 08:32:57.956162       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:32:57.956223       1 main.go:301] handling current node
	I1026 08:33:07.957451       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:07.957494       1 main.go:301] handling current node
	I1026 08:33:17.956436       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:17.956471       1 main.go:301] handling current node
	I1026 08:33:27.955967       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:27.956079       1 main.go:301] handling current node
	I1026 08:33:37.957871       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:37.957918       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd] <==
	I1026 08:32:46.735834       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:32:46.735947       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:32:46.737289       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:32:46.737329       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:32:46.737465       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:32:46.737646       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:32:46.738285       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:32:46.738456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:32:46.738465       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:32:46.741706       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:32:46.741735       1 policy_source.go:240] refreshing policies
	E1026 08:32:46.743518       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:32:46.765882       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:32:46.771925       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:32:47.068567       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:32:47.101537       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:32:47.127114       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:32:47.134743       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:32:47.142534       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:32:47.187670       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.123.141"}
	I1026 08:32:47.200481       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.28.151"}
	I1026 08:32:47.635202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:32:50.556299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:32:50.651747       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:32:50.703498       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30] <==
	I1026 08:32:50.067748       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:32:50.073142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:32:50.073161       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:32:50.073167       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:32:50.076343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:32:50.099087       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:32:50.100361       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 08:32:50.100406       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:32:50.100627       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:32:50.100652       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:32:50.101707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:32:50.101728       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:32:50.101752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 08:32:50.101835       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:32:50.102157       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 08:32:50.106419       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:50.108564       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:50.117751       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:32:50.121094       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:32:50.123288       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:32:50.124502       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:32:50.127798       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:32:50.129069       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:32:50.131180       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:32:50.661202       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028] <==
	I1026 08:32:47.516278       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:32:47.583441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:32:47.684475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:32:47.684527       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1026 08:32:47.684638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:32:47.707631       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:32:47.707691       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:32:47.713975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:32:47.714375       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:32:47.714411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:47.716092       1 config.go:309] "Starting node config controller"
	I1026 08:32:47.716108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:32:47.716239       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:32:47.716282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:32:47.716374       1 config.go:200] "Starting service config controller"
	I1026 08:32:47.716381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:32:47.716397       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:32:47.716403       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:32:47.816651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:32:47.816678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:32:47.816694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:32:47.816710       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2] <==
	I1026 08:32:45.717603       1 serving.go:386] Generated self-signed cert in-memory
	W1026 08:32:46.656033       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 08:32:46.656072       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 08:32:46.656089       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 08:32:46.656098       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 08:32:46.700409       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:32:46.700499       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:46.703330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:46.703389       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:46.703753       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:32:46.703785       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:32:46.803744       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:32:55 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:55.206628     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:55 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:55.206870     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:56.002309     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:56.212764     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:56.213487     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:32:57 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:57.215049     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:57 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:57.215569     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:01 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:01.703387     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wb2rv" podStartSLOduration=4.619975629 podStartE2EDuration="11.703366841s" podCreationTimestamp="2025-10-26 08:32:50 +0000 UTC" firstStartedPulling="2025-10-26 08:32:50.962280941 +0000 UTC m=+6.954672640" lastFinishedPulling="2025-10-26 08:32:58.045672155 +0000 UTC m=+14.038063852" observedRunningTime="2025-10-26 08:32:58.232645485 +0000 UTC m=+14.225037201" watchObservedRunningTime="2025-10-26 08:33:01.703366841 +0000 UTC m=+17.695758556"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.123289     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.246688     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.246894     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:08.247081     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:15 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:15.445205     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:15 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:15.445535     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:18 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:18.279928     716 scope.go:117] "RemoveContainer" containerID="5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.123819     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.319722     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.319931     716 scope.go:117] "RemoveContainer" containerID="26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:31.320164     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:35 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:35.444957     716 scope.go:117] "RemoveContainer" containerID="26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	Oct 26 08:33:35 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:35.445216     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: kubelet.service: Consumed 1.915s CPU time.
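
The back-off values in the kubelet errors above (10s, then 20s, then 40s) follow the crash-loop policy of doubling the restart delay after each failure; the 10s start and 5m cap below are the upstream kubelet defaults, assumed rather than read from this run:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		backoff := 10 * time.Second // initial crash-loop delay
		for restart := 1; restart <= 6; restart++ {
			fmt.Printf("restart %d: back-off %v\n", restart, backoff)
			backoff *= 2
			if backoff > 5*time.Minute {
				backoff = 5 * time.Minute // cap
			}
		}
	}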
	
	
	==> kubernetes-dashboard [13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b] <==
	2025/10/26 08:32:58 Using namespace: kubernetes-dashboard
	2025/10/26 08:32:58 Using in-cluster config to connect to apiserver
	2025/10/26 08:32:58 Using secret token for csrf signing
	2025/10/26 08:32:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:32:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:32:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:32:58 Generating JWE encryption key
	2025/10/26 08:32:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:32:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:32:58 Initializing JWE encryption key from synchronized object
	2025/10/26 08:32:58 Creating in-cluster Sidecar client
	2025/10/26 08:32:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:32:58 Serving insecurely on HTTP port: 9090
	2025/10/26 08:32:58 Starting overwatch
	2025/10/26 08:33:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297] <==
	I1026 08:33:18.336901       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1026 08:33:18.344981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:33:18.345038       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:33:18.347340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:21.802768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:26.068035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:29.665842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:32.719757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.742333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.746916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:33:35.747086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:33:35.747152       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46264170-5b73-4301-a763-5e3adc5f609e", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756 became leader
	I1026 08:33:35.747265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756!
	W1026 08:33:35.749664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.753386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:33:35.848180       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756!
	W1026 08:33:37.756939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:37.763137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:39.767637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:39.773997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:41.776907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:41.780728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:43.783410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:43.787345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696] <==
	I1026 08:32:47.484863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:33:17.486710       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
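
The fatal line above is the provisioner's startup call to the apiserver's /version endpoint timing out against the same 10.96.0.1 VIP coredns could not reach; the replacement instance (logged in the previous section) came up once connectivity settled. A stripped-down sketch of that call, with the service-account token and CA handling that client-go performs deliberately omitted:

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 32 * time.Second, // matches the ?timeout=32s in the log
			Transport: &http.Transport{
				// verification skipped only to keep the sketch self-contained
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://10.96.0.1:443/version")
		if err != nil {
			fmt.Println("error getting server version:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("status:", resp.Status)
	}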
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212: exit status 2 (331.868859ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
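
The two probes above are worth decoding: "status --format={{.APIServer}}" renders minikube's status through a Go text/template (which is why the stdout was the bare word "Running"), and the kubectl call uses a field selector to list only pods whose phase is not Running. A minimal sketch of the template half, with a stand-in struct (only the APIServer field name comes from the flag; the rest is hypothetical):

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for minikube's real status type.
	type Status struct {
		Host      string
		Kubelet   string
		APIServer string
	}

	func main() {
		tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
		_ = tmpl.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}
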
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-866212
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-866212:

-- stdout --
	[
	    {
	        "Id": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	        "Created": "2025-10-26T08:31:33.082391712Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 286036,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-10-26T08:32:36.66934556Z",
	            "FinishedAt": "2025-10-26T08:32:35.451370424Z"
	        },
	        "Image": "sha256:a1caeebaf98ed0136731e905a1e086f77985a42c2ebb5a7e0b3d0bd7fcbe10cc",
	        "ResolvConfPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hostname",
	        "HostsPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/hosts",
	        "LogPath": "/var/lib/docker/containers/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed/9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed-json.log",
	        "Name": "/default-k8s-diff-port-866212",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-866212:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-866212",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9325d9bcbadd396c9e988cd96d7cb3c148df1b6e64c9478782ba43a6a4e48bed",
	                "LowerDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe-init/diff:/var/lib/docker/overlay2/4dbc674758215aa284e45739a05b8bdb0c8d934ef742a54a140d299c1f29df29/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3ad3b1c0441a6dfe7d983bd846075d170734c32b25f3dbb10f22d7149ddb85fe/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-866212",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-866212/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-866212",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-866212",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0e9fc23710118247b3b6bbc3cf45f610ac1a8cd88cb60c13cb8ea05131bf603d",
	            "SandboxKey": "/var/run/docker/netns/0e9fc2371011",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-866212": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ba:23:ed:02:ad:ee",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b6895eb84e54294e7e4b0c2ef3aabe968c7a2cc155d3fbec01d47d6ad909fa85",
	                    "EndpointID": "38cbaf4944062491b328c3315019749f54684faeab89ff3b7a0b396025d6d07c",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-866212",
	                        "9325d9bcbadd"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
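For reference when reading the inspect dump above: the empty "HostPort" values under "PortBindings" mean the container was started with ephemeral host ports (published as --publish=127.0.0.1::<port>, as seen in the docker run invocation later in this log), and the ports actually assigned are the ones recorded under "NetworkSettings.Ports". A minimal sketch of pulling one of them back out with docker's Go-template support, using the container name from this run (the same template appears in the provisioning log further down):

	# Print the host port mapped to the container's SSH port (22/tcp).
	docker container inspect \
	  -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' \
	  default-k8s-diff-port-866212
	# For this run, the dump above shows 22/tcp published at 127.0.0.1:33111.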
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212: exit status 2 (350.315694ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
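The status probe above is a Go template evaluated against minikube's status struct, and minikube encodes cluster state in its exit code, which is why the harness notes that exit status 2 "may be ok". A minimal sketch of the same probe, assuming the binary and profile from this run:

	# Query only the host state of the profile; here the host reports
	# "Running" even though the command exits non-zero (component state
	# is encoded in the exit code).
	out/minikube-linux-amd64 status --format='{{.Host}}' -p default-k8s-diff-port-866212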
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25: (1.19402901s)
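To reproduce the post-mortem capture by hand, the same logs command can be run directly; the -n flag bounds how many lines are collected (25 here, matching the harness):

	# Gather the last 25 lines of log output for this profile.
	out/minikube-linux-amd64 -p default-k8s-diff-port-866212 logs -n 25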
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-110992 sudo cat /etc/kubernetes/kubelet.conf                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /var/lib/kubelet/config.yaml                                                                                                               │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status docker --all --full --no-pager                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat docker --no-pager                                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/docker/daemon.json                                                                                                                    │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo docker system info                                                                                                                             │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl status cri-docker --all --full --no-pager                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat cri-docker --no-pager                                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cri-dockerd --version                                                                                                                          │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p auto-110992 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo containerd config dump                                                                                                                         │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ ssh     │ -p auto-110992 sudo crio config                                                                                                                                    │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ delete  │ -p auto-110992                                                                                                                                                     │ auto-110992                  │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ start   │ -p custom-flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-110992        │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	│ ssh     │ -p kindnet-110992 pgrep -a kubelet                                                                                                                                 │ kindnet-110992               │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ image   │ default-k8s-diff-port-866212 image list --format=json                                                                                                              │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │ 26 Oct 25 08:33 UTC │
	│ pause   │ -p default-k8s-diff-port-866212 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-866212 │ jenkins │ v1.37.0 │ 26 Oct 25 08:33 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 08:33:10
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 08:33:10.159627  297886 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:33:10.159870  297886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:10.159875  297886 out.go:374] Setting ErrFile to fd 2...
	I1026 08:33:10.159879  297886 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:33:10.160178  297886 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:33:10.160848  297886 out.go:368] Setting JSON to false
	I1026 08:33:10.162396  297886 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4541,"bootTime":1761463049,"procs":339,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:33:10.162507  297886 start.go:141] virtualization: kvm guest
	I1026 08:33:10.165271  297886 out.go:179] * [custom-flannel-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:33:10.166596  297886 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:33:10.166615  297886 notify.go:220] Checking for updates...
	I1026 08:33:10.170244  297886 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:33:10.171584  297886 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:10.173077  297886 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:33:10.174451  297886 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:33:10.177163  297886 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:33:10.180549  297886 config.go:182] Loaded profile config "calico-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180719  297886 config.go:182] Loaded profile config "default-k8s-diff-port-866212": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180839  297886 config.go:182] Loaded profile config "kindnet-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:10.180955  297886 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:33:10.210885  297886 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:33:10.211064  297886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:33:10.287366  297886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:33:10.273883361 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:33:10.287520  297886 docker.go:318] overlay module found
	I1026 08:33:10.290226  297886 out.go:179] * Using the docker driver based on user configuration
	I1026 08:33:10.291314  297886 start.go:305] selected driver: docker
	I1026 08:33:10.291337  297886 start.go:925] validating driver "docker" against <nil>
	I1026 08:33:10.291350  297886 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:33:10.292015  297886 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:33:10.373449  297886 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:33:10.361573467 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:33:10.373652  297886 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 08:33:10.373917  297886 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:33:10.375717  297886 out.go:179] * Using Docker driver with root privileges
	I1026 08:33:10.377025  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:10.377061  297886 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1026 08:33:10.377189  297886 start.go:349] cluster config:
	{Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:33:10.378619  297886 out.go:179] * Starting "custom-flannel-110992" primary control-plane node in "custom-flannel-110992" cluster
	I1026 08:33:10.379645  297886 cache.go:123] Beginning downloading kic base image for docker with crio
	I1026 08:33:10.381064  297886 out.go:179] * Pulling base image v0.0.48-1760939008-21773 ...
	I1026 08:33:10.382259  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:10.382303  297886 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1026 08:33:10.382313  297886 cache.go:58] Caching tarball of preloaded images
	I1026 08:33:10.382374  297886 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon
	I1026 08:33:10.382446  297886 preload.go:233] Found /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1026 08:33:10.382458  297886 cache.go:61] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1026 08:33:10.382563  297886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json ...
	I1026 08:33:10.382609  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json: {Name:mke895c09b3c6d49dc9defb8c0e51e5fd7bf07e2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:10.405519  297886 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 in local docker daemon, skipping pull
	I1026 08:33:10.405543  297886 cache.go:147] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 exists in daemon, skipping load
	I1026 08:33:10.405562  297886 cache.go:232] Successfully downloaded all kic artifacts
	I1026 08:33:10.405596  297886 start.go:360] acquireMachinesLock for custom-flannel-110992: {Name:mk74c01f25a96369b104449921ef5549b38c2999 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1026 08:33:10.405703  297886 start.go:364] duration metric: took 89.651µs to acquireMachinesLock for "custom-flannel-110992"
	I1026 08:33:10.405726  297886 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:10.405802  297886 start.go:125] createHost starting for "" (driver="docker")
	W1026 08:33:06.335336  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:08.834472  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:10.835040  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:06.852730  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:09.351972  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:11.352404  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:11.136223  290986 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 1.975931s
	I1026 08:33:11.575689  290986 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 2.415391768s
	I1026 08:33:13.162200  290986 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.00197081s
	I1026 08:33:13.176216  290986 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:33:13.190513  290986 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:33:13.202329  290986 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:33:13.202630  290986 kubeadm.go:318] [mark-control-plane] Marking the node calico-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:33:13.211377  290986 kubeadm.go:318] [bootstrap-token] Using token: rlvwx1.6bndmtspzcvif1xf
	I1026 08:33:13.212793  290986 out.go:252]   - Configuring RBAC rules ...
	I1026 08:33:13.212981  290986 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:33:13.217115  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:33:13.223211  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:33:13.227034  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:33:13.229865  290986 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:33:13.232818  290986 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:33:13.616074  290986 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:33:14.638629  290986 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:33:10.408485  297886 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1026 08:33:10.408746  297886 start.go:159] libmachine.API.Create for "custom-flannel-110992" (driver="docker")
	I1026 08:33:10.408789  297886 client.go:168] LocalClient.Create starting
	I1026 08:33:10.408858  297886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem
	I1026 08:33:10.408900  297886 main.go:141] libmachine: Decoding PEM data...
	I1026 08:33:10.408920  297886 main.go:141] libmachine: Parsing certificate...
	I1026 08:33:10.408998  297886 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem
	I1026 08:33:10.409025  297886 main.go:141] libmachine: Decoding PEM data...
	I1026 08:33:10.409038  297886 main.go:141] libmachine: Parsing certificate...
	I1026 08:33:10.409451  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1026 08:33:10.430570  297886 cli_runner.go:211] docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1026 08:33:10.430672  297886 network_create.go:284] running [docker network inspect custom-flannel-110992] to gather additional debugging logs...
	I1026 08:33:10.430695  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992
	W1026 08:33:10.452218  297886 cli_runner.go:211] docker network inspect custom-flannel-110992 returned with exit code 1
	I1026 08:33:10.452259  297886 network_create.go:287] error running [docker network inspect custom-flannel-110992]: docker network inspect custom-flannel-110992: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-110992 not found
	I1026 08:33:10.452285  297886 network_create.go:289] output of [docker network inspect custom-flannel-110992]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-110992 not found
	
	** /stderr **
	I1026 08:33:10.452410  297886 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:33:10.474976  297886 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
	I1026 08:33:10.475876  297886 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-dd6ed9f615a5 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:4e:78:96:65:8c:60} reservation:<nil>}
	I1026 08:33:10.476795  297886 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-f2a983bf4577 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:8e:62:ae:31:43:82} reservation:<nil>}
	I1026 08:33:10.477822  297886 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e95bb0}
	I1026 08:33:10.477853  297886 network_create.go:124] attempt to create docker network custom-flannel-110992 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1026 08:33:10.477913  297886 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-110992 custom-flannel-110992
	I1026 08:33:10.549346  297886 network_create.go:108] docker network custom-flannel-110992 192.168.76.0/24 created
	I1026 08:33:10.549385  297886 kic.go:121] calculated static IP "192.168.76.2" for the "custom-flannel-110992" container
	I1026 08:33:10.549457  297886 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1026 08:33:10.571667  297886 cli_runner.go:164] Run: docker volume create custom-flannel-110992 --label name.minikube.sigs.k8s.io=custom-flannel-110992 --label created_by.minikube.sigs.k8s.io=true
	I1026 08:33:10.595077  297886 oci.go:103] Successfully created a docker volume custom-flannel-110992
	I1026 08:33:10.595160  297886 cli_runner.go:164] Run: docker run --rm --name custom-flannel-110992-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-110992 --entrypoint /usr/bin/test -v custom-flannel-110992:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -d /var/lib
	I1026 08:33:11.098483  297886 oci.go:107] Successfully prepared a docker volume custom-flannel-110992
	I1026 08:33:11.098532  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:11.098557  297886 kic.go:194] Starting extracting preloaded images to volume ...
	I1026 08:33:11.098627  297886 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir
	I1026 08:33:15.349082  290986 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:33:15.350319  290986 kubeadm.go:318] 
	I1026 08:33:15.350413  290986 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:33:15.350431  290986 kubeadm.go:318] 
	I1026 08:33:15.350548  290986 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:33:15.350570  290986 kubeadm.go:318] 
	I1026 08:33:15.350603  290986 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:33:15.350707  290986 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:33:15.350801  290986 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:33:15.350811  290986 kubeadm.go:318] 
	I1026 08:33:15.350884  290986 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:33:15.350894  290986 kubeadm.go:318] 
	I1026 08:33:15.350968  290986 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:33:15.350987  290986 kubeadm.go:318] 
	I1026 08:33:15.351065  290986 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:33:15.351178  290986 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:33:15.351287  290986 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:33:15.351297  290986 kubeadm.go:318] 
	I1026 08:33:15.351396  290986 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:33:15.351525  290986 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:33:15.351554  290986 kubeadm.go:318] 
	I1026 08:33:15.351689  290986 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token rlvwx1.6bndmtspzcvif1xf \
	I1026 08:33:15.351837  290986 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:33:15.351871  290986 kubeadm.go:318] 	--control-plane 
	I1026 08:33:15.351882  290986 kubeadm.go:318] 
	I1026 08:33:15.351989  290986 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:33:15.352001  290986 kubeadm.go:318] 
	I1026 08:33:15.352100  290986 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token rlvwx1.6bndmtspzcvif1xf \
	I1026 08:33:15.352234  290986 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:33:15.355117  290986 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:33:15.355316  290986 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1026 08:33:15.355335  290986 cni.go:84] Creating CNI manager for "calico"
	I1026 08:33:15.385852  290986 out.go:179] * Configuring Calico (Container Networking Interface) ...
	W1026 08:33:13.334138  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:15.834602  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:13.852546  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:16.364526  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:15.705518  297886 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-110992:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 -I lz4 -xf /preloaded.tar -C /extractDir: (4.606832183s)
	I1026 08:33:15.705554  297886 kic.go:203] duration metric: took 4.606994333s to extract preloaded images to volume ...
	W1026 08:33:15.705657  297886 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1026 08:33:15.705690  297886 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1026 08:33:15.705735  297886 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1026 08:33:15.777086  297886 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname custom-flannel-110992 --name custom-flannel-110992 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-110992 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=custom-flannel-110992 --network custom-flannel-110992 --ip 192.168.76.2 --volume custom-flannel-110992:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8
	I1026 08:33:16.158091  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Running}}
	I1026 08:33:16.181688  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.205977  297886 cli_runner.go:164] Run: docker exec custom-flannel-110992 stat /var/lib/dpkg/alternatives/iptables
	I1026 08:33:16.278772  297886 oci.go:144] the created container "custom-flannel-110992" has a running status.
	I1026 08:33:16.278805  297886 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa...
	I1026 08:33:16.682352  297886 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1026 08:33:16.712059  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.731959  297886 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1026 08:33:16.731983  297886 kic_runner.go:114] Args: [docker exec --privileged custom-flannel-110992 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1026 08:33:16.778152  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:16.796571  297886 machine.go:93] provisionDockerMachine start ...
	I1026 08:33:16.796681  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:16.815443  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:16.815673  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:16.815686  297886 main.go:141] libmachine: About to run SSH command:
	hostname
	I1026 08:33:16.960568  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-110992
	
	I1026 08:33:16.960597  297886 ubuntu.go:182] provisioning hostname "custom-flannel-110992"
	I1026 08:33:16.960648  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:16.980837  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:16.981108  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:16.981135  297886 main.go:141] libmachine: About to run SSH command:
	sudo hostname custom-flannel-110992 && echo "custom-flannel-110992" | sudo tee /etc/hostname
	I1026 08:33:17.136695  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: custom-flannel-110992
	
	I1026 08:33:17.136760  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.157882  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:17.158186  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:17.158226  297886 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scustom-flannel-110992' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 custom-flannel-110992/g' /etc/hosts;
				else 
					echo '127.0.1.1 custom-flannel-110992' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1026 08:33:17.304441  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1026 08:33:17.304472  297886 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21772-9429/.minikube CaCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21772-9429/.minikube}
	I1026 08:33:17.304500  297886 ubuntu.go:190] setting up certificates
	I1026 08:33:17.304512  297886 provision.go:84] configureAuth start
	I1026 08:33:17.304573  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:17.322933  297886 provision.go:143] copyHostCerts
	I1026 08:33:17.323011  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem, removing ...
	I1026 08:33:17.323025  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem
	I1026 08:33:17.323110  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/ca.pem (1078 bytes)
	I1026 08:33:17.323235  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem, removing ...
	I1026 08:33:17.323259  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem
	I1026 08:33:17.323314  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/cert.pem (1123 bytes)
	I1026 08:33:17.323414  297886 exec_runner.go:144] found /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem, removing ...
	I1026 08:33:17.323425  297886 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem
	I1026 08:33:17.323479  297886 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21772-9429/.minikube/key.pem (1675 bytes)
	I1026 08:33:17.323551  297886 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem org=jenkins.custom-flannel-110992 san=[127.0.0.1 192.168.76.2 custom-flannel-110992 localhost minikube]
	I1026 08:33:17.659038  297886 provision.go:177] copyRemoteCerts
	I1026 08:33:17.659097  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1026 08:33:17.659136  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.680424  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:17.781941  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1026 08:33:17.802808  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1026 08:33:17.820888  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I1026 08:33:17.840435  297886 provision.go:87] duration metric: took 535.903369ms to configureAuth
	I1026 08:33:17.840466  297886 ubuntu.go:206] setting minikube options for container-runtime
	I1026 08:33:17.840630  297886 config.go:182] Loaded profile config "custom-flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:17.840739  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:17.861881  297886 main.go:141] libmachine: Using SSH client type: native
	I1026 08:33:17.862092  297886 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33121 <nil> <nil>}
	I1026 08:33:17.862109  297886 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1026 08:33:18.121645  297886 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1026 08:33:18.121673  297886 machine.go:96] duration metric: took 1.325076637s to provisionDockerMachine
	I1026 08:33:18.121686  297886 client.go:171] duration metric: took 7.712888016s to LocalClient.Create
	I1026 08:33:18.121708  297886 start.go:167] duration metric: took 7.712962979s to libmachine.API.Create "custom-flannel-110992"
	I1026 08:33:18.121721  297886 start.go:293] postStartSetup for "custom-flannel-110992" (driver="docker")
	I1026 08:33:18.121732  297886 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1026 08:33:18.121784  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1026 08:33:18.121846  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.143384  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.247728  297886 ssh_runner.go:195] Run: cat /etc/os-release
	I1026 08:33:18.251576  297886 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1026 08:33:18.251605  297886 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1026 08:33:18.251618  297886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/addons for local assets ...
	I1026 08:33:18.251673  297886 filesync.go:126] Scanning /home/jenkins/minikube-integration/21772-9429/.minikube/files for local assets ...
	I1026 08:33:18.251769  297886 filesync.go:149] local asset: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem -> 129212.pem in /etc/ssl/certs
	I1026 08:33:18.251899  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1026 08:33:18.259702  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:33:18.283187  297886 start.go:296] duration metric: took 161.451039ms for postStartSetup
	I1026 08:33:18.283663  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:18.304304  297886 profile.go:143] Saving config to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/config.json ...
	I1026 08:33:18.304593  297886 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:33:18.304634  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.326126  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.428435  297886 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1026 08:33:18.433231  297886 start.go:128] duration metric: took 8.027417484s to createHost
	I1026 08:33:18.433269  297886 start.go:83] releasing machines lock for "custom-flannel-110992", held for 8.027556073s
	I1026 08:33:18.433350  297886 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" custom-flannel-110992
	I1026 08:33:18.452438  297886 ssh_runner.go:195] Run: cat /version.json
	I1026 08:33:18.452482  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.452552  297886 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1026 08:33:18.452619  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:18.471303  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.471511  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:18.569544  297886 ssh_runner.go:195] Run: systemctl --version
	I1026 08:33:18.629261  297886 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1026 08:33:18.676972  297886 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1026 08:33:18.683027  297886 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1026 08:33:18.683092  297886 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1026 08:33:18.716408  297886 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1026 08:33:18.716430  297886 start.go:495] detecting cgroup driver to use...
	I1026 08:33:18.716465  297886 detect.go:190] detected "systemd" cgroup driver on host os
	I1026 08:33:18.716510  297886 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1026 08:33:18.733161  297886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1026 08:33:18.745806  297886 docker.go:218] disabling cri-docker service (if available) ...
	I1026 08:33:18.745855  297886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1026 08:33:18.763344  297886 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1026 08:33:18.781323  297886 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1026 08:33:18.869284  297886 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1026 08:33:18.969523  297886 docker.go:234] disabling docker service ...
	I1026 08:33:18.969598  297886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1026 08:33:18.996066  297886 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1026 08:33:19.011388  297886 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1026 08:33:19.125072  297886 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1026 08:33:19.229067  297886 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1026 08:33:19.244169  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1026 08:33:19.261359  297886 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1026 08:33:19.261418  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.273589  297886 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1026 08:33:19.273641  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.284050  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.294148  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.304083  297886 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1026 08:33:19.312611  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.322147  297886 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.337373  297886 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1026 08:33:19.346603  297886 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1026 08:33:19.354948  297886 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1026 08:33:19.362702  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:19.443108  297886 ssh_runner.go:195] Run: sudo systemctl restart crio
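[editor's note] The sequence above patches /etc/crio/crio.conf.d/02-crio.conf with in-place sed edits (pause image, cgroup driver, conmon cgroup, sysctls) and then reloads systemd and restarts the runtime. A minimal Go sketch of the same edit-then-restart pattern, under the assumption that commands run locally; runCmd is a hypothetical stand-in for ssh_runner's Run:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runCmd is a hypothetical helper mirroring ssh_runner's Run: it executes
	// a shell command and returns its combined output.
	func runCmd(script string) (string, error) {
		out, err := exec.Command("/bin/sh", "-c", script).CombinedOutput()
		return string(out), err
	}

	func main() {
		conf := "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above
		edits := []string{
			// point CRI-O at the pause image and cgroup driver used by kubeadm
			fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' %s`, conf),
			fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' %s`, conf),
			// apply the edits by reloading units and restarting the runtime
			"sudo systemctl daemon-reload",
			"sudo systemctl restart crio",
		}
		for _, e := range edits {
			if out, err := runCmd(e); err != nil {
				fmt.Printf("edit failed: %v\n%s", err, out)
				return
			}
		}
	}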
	I1026 08:33:19.561078  297886 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1026 08:33:19.561133  297886 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1026 08:33:19.565383  297886 start.go:563] Will wait 60s for crictl version
	I1026 08:33:19.565438  297886 ssh_runner.go:195] Run: which crictl
	I1026 08:33:19.569168  297886 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1026 08:33:19.595993  297886 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1026 08:33:19.596075  297886 ssh_runner.go:195] Run: crio --version
	I1026 08:33:19.626325  297886 ssh_runner.go:195] Run: crio --version
	I1026 08:33:19.658625  297886 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1026 08:33:15.390234  290986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:33:15.390278  290986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I1026 08:33:15.407649  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1026 08:33:16.488841  290986 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.081149635s)
	I1026 08:33:16.488904  290986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:33:16.489267  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:16.489341  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-110992 minikube.k8s.io/updated_at=2025_10_26T08_33_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=calico-110992 minikube.k8s.io/primary=true
	I1026 08:33:16.504854  290986 ops.go:34] apiserver oom_adj: -16
	I1026 08:33:16.608100  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:17.108155  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:17.609108  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:18.108150  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:18.608980  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.108372  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.609114  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:19.661128  297886 cli_runner.go:164] Run: docker network inspect custom-flannel-110992 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1026 08:33:19.681864  297886 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1026 08:33:19.686032  297886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1026 08:33:19.696924  297886 kubeadm.go:883] updating cluster {Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1026 08:33:19.697060  297886 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1026 08:33:19.697114  297886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:33:19.730351  297886 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:33:19.730377  297886 crio.go:433] Images already preloaded, skipping extraction
	I1026 08:33:19.730429  297886 ssh_runner.go:195] Run: sudo crictl images --output json
	I1026 08:33:19.758293  297886 crio.go:514] all images are preloaded for cri-o runtime.
	I1026 08:33:19.758318  297886 cache_images.go:85] Images are preloaded, skipping loading
	I1026 08:33:19.758327  297886 kubeadm.go:934] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1026 08:33:19.758421  297886 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=custom-flannel-110992 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml}
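[editor's note] The kubelet unit dump above uses the systemd drop-in convention: the bare "ExecStart=" first clears the command inherited from the base unit so that the following ExecStart line replaces it outright. A sketch of rendering such a drop-in with text/template; the template field names are illustrative, the values are taken from the log:

	package main

	import (
		"os"
		"text/template"
	)

	// tmpl renders a kubelet systemd drop-in like the one logged above. The
	// empty "ExecStart=" line is deliberate: in a drop-in it resets the base
	// unit's ExecStart so the next line fully replaces it.
	const tmpl = `[Unit]
	Wants={{.Runtime}}.service

	[Service]
	ExecStart=
	ExecStart={{.KubeletPath}} --hostname-override={{.Node}} --node-ip={{.IP}}

	[Install]
	`

	func main() {
		data := struct {
			Runtime, KubeletPath, Node, IP string
		}{
			// values copied from the log; the struct itself is an illustration
			"crio", "/var/lib/minikube/binaries/v1.34.1/kubelet",
			"custom-flannel-110992", "192.168.76.2",
		}
		t := template.Must(template.New("dropin").Parse(tmpl))
		_ = t.Execute(os.Stdout, data) // minikube then scp's this to 10-kubeadm.conf
	}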
	I1026 08:33:19.758502  297886 ssh_runner.go:195] Run: crio config
	I1026 08:33:19.808371  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:19.808417  297886 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1026 08:33:19.808447  297886 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:custom-flannel-110992 NodeName:custom-flannel-110992 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPa
th:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1026 08:33:19.808578  297886 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "custom-flannel-110992"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
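	[editor's note] The kubeadm config above is a multi-document YAML stream (documents separated by "---"). A cheap sanity check before handing the file to "kubeadm init --config" is to decode each document and confirm its apiVersion/kind; a minimal sketch, assuming gopkg.in/yaml.v3 is available:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path from the log
		if err != nil {
			panic(err)
		}
		defer f.Close()

		// yaml.Decoder walks the stream one document at a time and returns
		// io.EOF when the last document has been consumed.
		dec := yaml.NewDecoder(f)
		for {
			var doc map[string]interface{}
			if err := dec.Decode(&doc); err == io.EOF {
				break
			} else if err != nil {
				panic(err)
			}
			fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
		}
	}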
	
	I1026 08:33:19.808644  297886 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1026 08:33:19.819195  297886 binaries.go:44] Found k8s binaries, skipping transfer
	I1026 08:33:19.819291  297886 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1026 08:33:19.835557  297886 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I1026 08:33:19.852845  297886 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1026 08:33:19.870227  297886 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I1026 08:33:19.883934  297886 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1026 08:33:19.887638  297886 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
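[editor's note] The two Run lines above make the /etc/hosts update idempotent: grep away any stale record for the name, append a fresh one, and copy the temp file back into place. The same pattern in Go; updateHosts is a hypothetical helper, and the atomic rename is an illustrative substitute for the log's sudo cp:

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// updateHosts mirrors the grep -v / echo / cp pipeline above: drop any
	// existing record for the name, append the fresh one, and write the file
	// back via a temp file.
	func updateHosts(path, ip, name string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			if !strings.HasSuffix(line, "\t"+name) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
		tmp := path + ".tmp"
		if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			return err
		}
		return os.Rename(tmp, path)
	}

	func main() {
		if err := updateHosts("/etc/hosts", "192.168.76.2", "control-plane.minikube.internal"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}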
	I1026 08:33:19.897601  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:19.991885  297886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:20.025278  297886 certs.go:69] Setting up /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992 for IP: 192.168.76.2
	I1026 08:33:20.025302  297886 certs.go:195] generating shared ca certs ...
	I1026 08:33:20.025323  297886 certs.go:227] acquiring lock for ca certs: {Name:mk5d0918d5480563f897de15e1280a1ade3ea7ea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.025477  297886 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key
	I1026 08:33:20.025534  297886 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key
	I1026 08:33:20.025546  297886 certs.go:257] generating profile certs ...
	I1026 08:33:20.025599  297886 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key
	I1026 08:33:20.025611  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt with IP's: []
	I1026 08:33:20.078753  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt ...
	I1026 08:33:20.078783  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.crt: {Name:mkf9cfad17be61bc1319469d32827e7697fee50a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.078981  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key ...
	I1026 08:33:20.078998  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/client.key: {Name:mk24a463c89249ff97baea6d0c80b2fbfc1e46b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.079103  297886 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde
	I1026 08:33:20.079129  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1026 08:33:20.108951  290986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:20.185950  290986 kubeadm.go:1113] duration metric: took 3.696749644s to wait for elevateKubeSystemPrivileges
	I1026 08:33:20.185987  290986 kubeadm.go:402] duration metric: took 17.693833072s to StartCluster
	I1026 08:33:20.186006  290986 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.186071  290986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:20.187283  290986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.187489  290986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:33:20.187499  290986 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:20.187591  290986 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:33:20.187704  290986 addons.go:69] Setting storage-provisioner=true in profile "calico-110992"
	I1026 08:33:20.187726  290986 addons.go:238] Setting addon storage-provisioner=true in "calico-110992"
	I1026 08:33:20.187760  290986 host.go:66] Checking if "calico-110992" exists ...
	I1026 08:33:20.187778  290986 addons.go:69] Setting default-storageclass=true in profile "calico-110992"
	I1026 08:33:20.187801  290986 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-110992"
	I1026 08:33:20.187682  290986 config.go:182] Loaded profile config "calico-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:20.188577  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.188656  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.191942  290986 out.go:179] * Verifying Kubernetes components...
	I1026 08:33:20.193489  290986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:20.217421  290986 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:33:20.218952  290986 addons.go:238] Setting addon default-storageclass=true in "calico-110992"
	I1026 08:33:20.219070  290986 host.go:66] Checking if "calico-110992" exists ...
	I1026 08:33:20.218979  290986 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:20.219144  290986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:33:20.219203  290986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-110992
	I1026 08:33:20.219471  290986 cli_runner.go:164] Run: docker container inspect calico-110992 --format={{.State.Status}}
	I1026 08:33:20.253351  290986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/calico-110992/id_rsa Username:docker}
	I1026 08:33:20.256191  290986 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:20.256213  290986 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:33:20.256409  290986 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-110992
	I1026 08:33:20.282596  290986 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33116 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/calico-110992/id_rsa Username:docker}
	I1026 08:33:20.301461  290986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:33:20.346748  290986 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:20.383770  290986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:20.433235  290986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:20.503779  290986 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1026 08:33:20.505054  290986 node_ready.go:35] waiting up to 15m0s for node "calico-110992" to be "Ready" ...
	I1026 08:33:20.722723  290986 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	W1026 08:33:17.834747  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:19.835021  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	I1026 08:33:20.311137  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde ...
	I1026 08:33:20.311168  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde: {Name:mk4031590c5d52a48bbf59d2a7a3c9dc14a78dde Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.311404  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde ...
	I1026 08:33:20.311427  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde: {Name:mk238b6e77d40947448f51b642a99e2a2db52447 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.311546  297886 certs.go:382] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt.43060bde -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt
	I1026 08:33:20.311642  297886 certs.go:386] copying /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key.43060bde -> /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key
	I1026 08:33:20.311727  297886 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key
	I1026 08:33:20.311749  297886 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt with IP's: []
	I1026 08:33:20.540592  297886 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt ...
	I1026 08:33:20.540679  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt: {Name:mkb6ba530b5c93eca805e4b599bc336817efe544 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:20.540892  297886 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key ...
	I1026 08:33:20.540939  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key: {Name:mkad407034e7e52156d860185c4a4dac9857a417 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
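[editor's note] The certs.go/crypto.go lines above generate profile certificates signed by the shared minikube CA, embedding the cluster IPs as SANs. A compressed crypto/x509 sketch of that flow; key sizes, lifetimes, and the inline error elision are illustrative, not minikube's actual values:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// self-signed CA, standing in for the cached minikubeCA key pair
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(10, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// serving cert signed by the CA, with the IP SANs from the log above
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}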
	I1026 08:33:20.541211  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem (1338 bytes)
	W1026 08:33:20.541300  297886 certs.go:480] ignoring /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921_empty.pem, impossibly tiny 0 bytes
	I1026 08:33:20.541329  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca-key.pem (1675 bytes)
	I1026 08:33:20.541396  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/ca.pem (1078 bytes)
	I1026 08:33:20.541447  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/cert.pem (1123 bytes)
	I1026 08:33:20.541497  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/certs/key.pem (1675 bytes)
	I1026 08:33:20.541568  297886 certs.go:484] found cert: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem (1708 bytes)
	I1026 08:33:20.542445  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1026 08:33:20.565951  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1026 08:33:20.585809  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1026 08:33:20.604494  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1026 08:33:20.623740  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1026 08:33:20.642477  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1026 08:33:20.661443  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1026 08:33:20.682669  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/custom-flannel-110992/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1026 08:33:20.704364  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/ssl/certs/129212.pem --> /usr/share/ca-certificates/129212.pem (1708 bytes)
	I1026 08:33:20.727187  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1026 08:33:20.748749  297886 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21772-9429/.minikube/certs/12921.pem --> /usr/share/ca-certificates/12921.pem (1338 bytes)
	I1026 08:33:20.771002  297886 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1026 08:33:20.784879  297886 ssh_runner.go:195] Run: openssl version
	I1026 08:33:20.792074  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1026 08:33:20.802194  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.806530  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Oct 26 07:47 /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.806589  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1026 08:33:20.846226  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1026 08:33:20.856299  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/12921.pem && ln -fs /usr/share/ca-certificates/12921.pem /etc/ssl/certs/12921.pem"
	I1026 08:33:20.865343  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.869325  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Oct 26 07:53 /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.869366  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/12921.pem
	I1026 08:33:20.906905  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/12921.pem /etc/ssl/certs/51391683.0"
	I1026 08:33:20.917156  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/129212.pem && ln -fs /usr/share/ca-certificates/129212.pem /etc/ssl/certs/129212.pem"
	I1026 08:33:20.927077  297886 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.931045  297886 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Oct 26 07:53 /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.931107  297886 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/129212.pem
	I1026 08:33:20.976592  297886 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/129212.pem /etc/ssl/certs/3ec20f2e.0"
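[editor's note] The "ln -fs" commands above install each CA under its OpenSSL subject hash: tools that scan /etc/ssl/certs look up issuers via "<hash>.0" symlinks, and "openssl x509 -hash -noout" prints that hash (b5213941 for minikubeCA, per the log). A small Go sketch of the same dance; installCA is a hypothetical helper:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA asks openssl for the certificate's subject hash and links the
	// PEM into /etc/ssl/certs under "<hash>.0", mirroring the commands above.
	func installCA(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		os.Remove(link) // refresh a stale link, as "ln -fs" does above
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}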
	I1026 08:33:20.987042  297886 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1026 08:33:20.991569  297886 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1026 08:33:20.991634  297886 kubeadm.go:400] StartCluster: {Name:custom-flannel-110992 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-110992 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 08:33:20.991716  297886 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1026 08:33:20.991775  297886 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1026 08:33:21.033486  297886 cri.go:89] found id: ""
	I1026 08:33:21.033561  297886 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1026 08:33:21.048893  297886 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1026 08:33:21.059920  297886 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1026 08:33:21.059989  297886 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1026 08:33:21.071180  297886 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1026 08:33:21.071207  297886 kubeadm.go:157] found existing configuration files:
	
	I1026 08:33:21.071278  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1026 08:33:21.082833  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1026 08:33:21.082898  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1026 08:33:21.093537  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1026 08:33:21.103413  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1026 08:33:21.103481  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1026 08:33:21.113157  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1026 08:33:21.123721  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1026 08:33:21.123779  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1026 08:33:21.133865  297886 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1026 08:33:21.144812  297886 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1026 08:33:21.144880  297886 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1026 08:33:21.155342  297886 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1026 08:33:21.205677  297886 kubeadm.go:318] [init] Using Kubernetes version: v1.34.1
	I1026 08:33:21.205742  297886 kubeadm.go:318] [preflight] Running pre-flight checks
	I1026 08:33:21.234895  297886 kubeadm.go:318] [preflight] The system verification failed. Printing the output from the verification:
	I1026 08:33:21.235030  297886 kubeadm.go:318] KERNEL_VERSION: 6.8.0-1042-gcp
	I1026 08:33:21.235099  297886 kubeadm.go:318] OS: Linux
	I1026 08:33:21.235311  297886 kubeadm.go:318] CGROUPS_CPU: enabled
	I1026 08:33:21.235417  297886 kubeadm.go:318] CGROUPS_CPUSET: enabled
	I1026 08:33:21.235501  297886 kubeadm.go:318] CGROUPS_DEVICES: enabled
	I1026 08:33:21.235585  297886 kubeadm.go:318] CGROUPS_FREEZER: enabled
	I1026 08:33:21.235670  297886 kubeadm.go:318] CGROUPS_MEMORY: enabled
	I1026 08:33:21.235754  297886 kubeadm.go:318] CGROUPS_PIDS: enabled
	I1026 08:33:21.235826  297886 kubeadm.go:318] CGROUPS_HUGETLB: enabled
	I1026 08:33:21.235877  297886 kubeadm.go:318] CGROUPS_IO: enabled
	I1026 08:33:21.315186  297886 kubeadm.go:318] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1026 08:33:21.315368  297886 kubeadm.go:318] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1026 08:33:21.315503  297886 kubeadm.go:318] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1026 08:33:21.328277  297886 kubeadm.go:318] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	W1026 08:33:18.852227  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:20.852422  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:20.724009  290986 addons.go:514] duration metric: took 536.413433ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:33:21.009612  290986 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-110992" context rescaled to 1 replicas
	W1026 08:33:22.508334  290986 node_ready.go:57] node "calico-110992" has "Ready":"False" status (will retry)
	I1026 08:33:24.508345  290986 node_ready.go:49] node "calico-110992" is "Ready"
	I1026 08:33:24.508375  290986 node_ready.go:38] duration metric: took 4.003295607s for node "calico-110992" to be "Ready" ...
	I1026 08:33:24.508390  290986 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:33:24.508449  290986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:33:24.524003  290986 api_server.go:72] duration metric: took 4.336466809s to wait for apiserver process to appear ...
	I1026 08:33:24.524033  290986 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:33:24.524056  290986 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1026 08:33:24.530501  290986 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1026 08:33:24.531804  290986 api_server.go:141] control plane version: v1.34.1
	I1026 08:33:24.531836  290986 api_server.go:131] duration metric: took 7.796123ms to wait for apiserver health ...
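[editor's note] The healthz probe above simply polls the apiserver endpoint until it answers 200 "ok". A sketch of such a probe; TLS verification is skipped here for brevity, whereas the real check authenticates against the cluster CA:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		// poll /healthz until the control plane answers 200
		for {
			resp, err := client.Get("https://192.168.85.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}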
	I1026 08:33:24.531847  290986 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:33:24.535744  290986 system_pods.go:59] 9 kube-system pods found
	I1026 08:33:24.535783  290986 system_pods.go:61] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.535795  290986 system_pods.go:61] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.535806  290986 system_pods.go:61] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.535812  290986 system_pods.go:61] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.535818  290986 system_pods.go:61] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.535821  290986 system_pods.go:61] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.535825  290986 system_pods.go:61] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.535830  290986 system_pods.go:61] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.535835  290986 system_pods.go:61] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.535842  290986 system_pods.go:74] duration metric: took 3.98769ms to wait for pod list to return data ...
	I1026 08:33:24.535852  290986 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:33:24.538590  290986 default_sa.go:45] found service account: "default"
	I1026 08:33:24.538610  290986 default_sa.go:55] duration metric: took 2.752036ms for default service account to be created ...
	I1026 08:33:24.538620  290986 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:33:24.542184  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:24.542231  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.542268  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.542279  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.542291  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.542301  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.542307  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.542314  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.542319  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.542329  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.542366  290986 retry.go:31] will retry after 194.395436ms: missing components: kube-dns
	I1026 08:33:24.742475  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:24.742518  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:24.742532  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:24.742542  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:24.742548  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:24.742558  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:24.742565  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:24.742571  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:24.742576  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:24.742584  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:24.742602  290986 retry.go:31] will retry after 353.307385ms: missing components: kube-dns
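[editor's note] The retry.go lines above show the readiness loop: list the kube-system pods, report what is still missing (here kube-dns), and wait a randomized, growing interval before polling again. A sketch of that loop; checkPods is a hypothetical stand-in for the real pod listing:

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// checkPods stands in for querying the apiserver; it pretends kube-dns
	// stays Pending for a few polls before coming up.
	func checkPods() (missing []string) {
		if rand.Intn(3) != 0 {
			return []string{"kube-dns"}
		}
		return nil
	}

	func main() {
		delay := 200 * time.Millisecond
		for {
			missing := checkPods()
			if len(missing) == 0 {
				fmt.Println("all components running")
				return
			}
			// grow the base delay and add jitter, like retry.go's varying waits
			wait := delay + time.Duration(rand.Int63n(int64(delay)))
			fmt.Printf("will retry after %v: missing components: %v\n", wait, missing)
			time.Sleep(wait)
			delay = delay * 3 / 2
		}
	}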
	I1026 08:33:21.331168  297886 out.go:252]   - Generating certificates and keys ...
	I1026 08:33:21.331331  297886 kubeadm.go:318] [certs] Using existing ca certificate authority
	I1026 08:33:21.331445  297886 kubeadm.go:318] [certs] Using existing apiserver certificate and key on disk
	I1026 08:33:22.015283  297886 kubeadm.go:318] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1026 08:33:22.618691  297886 kubeadm.go:318] [certs] Generating "front-proxy-ca" certificate and key
	I1026 08:33:23.104400  297886 kubeadm.go:318] [certs] Generating "front-proxy-client" certificate and key
	I1026 08:33:23.361892  297886 kubeadm.go:318] [certs] Generating "etcd/ca" certificate and key
	I1026 08:33:23.778518  297886 kubeadm.go:318] [certs] Generating "etcd/server" certificate and key
	I1026 08:33:23.778702  297886 kubeadm.go:318] [certs] etcd/server serving cert is signed for DNS names [custom-flannel-110992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 08:33:24.343893  297886 kubeadm.go:318] [certs] Generating "etcd/peer" certificate and key
	I1026 08:33:24.344080  297886 kubeadm.go:318] [certs] etcd/peer serving cert is signed for DNS names [custom-flannel-110992 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1026 08:33:24.643312  297886 kubeadm.go:318] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1026 08:33:24.895308  297886 kubeadm.go:318] [certs] Generating "apiserver-etcd-client" certificate and key
	W1026 08:33:22.334949  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:24.835300  285842 pod_ready.go:104] pod "coredns-66bc5c9577-h4dk5" is not "Ready", error: <nil>
	W1026 08:33:23.351901  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	W1026 08:33:25.352221  278592 node_ready.go:57] node "kindnet-110992" has "Ready":"False" status (will retry)
	I1026 08:33:25.176121  297886 kubeadm.go:318] [certs] Generating "sa" key and public key
	I1026 08:33:25.176324  297886 kubeadm.go:318] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1026 08:33:25.683609  297886 kubeadm.go:318] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1026 08:33:26.263789  297886 kubeadm.go:318] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1026 08:33:26.577514  297886 kubeadm.go:318] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1026 08:33:27.292539  297886 kubeadm.go:318] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1026 08:33:27.451683  297886 kubeadm.go:318] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1026 08:33:27.452192  297886 kubeadm.go:318] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1026 08:33:27.457326  297886 kubeadm.go:318] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
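[editor's note: kubeadm writes one kubeconfig per component under /etc/kubernetes. Which identity each file embeds can be checked on the node with the pinned kubectl the log itself invokes; a sketch:

    sudo ls /etc/kubernetes/*.conf
    sudo /var/lib/minikube/binaries/v1.34.1/kubectl \
      --kubeconfig=/etc/kubernetes/controller-manager.conf \
      config view --minify -o jsonpath='{.users[0].name}'
]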
	I1026 08:33:26.335292  285842 pod_ready.go:94] pod "coredns-66bc5c9577-h4dk5" is "Ready"
	I1026 08:33:26.335321  285842 pod_ready.go:86] duration metric: took 38.006630881s for pod "coredns-66bc5c9577-h4dk5" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.338390  285842 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.343606  285842 pod_ready.go:94] pod "etcd-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.343629  285842 pod_ready.go:86] duration metric: took 5.218305ms for pod "etcd-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.345751  285842 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.350501  285842 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.350527  285842 pod_ready.go:86] duration metric: took 4.751106ms for pod "kube-apiserver-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.354562  285842 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.533352  285842 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:26.533378  285842 pod_ready.go:86] duration metric: took 178.790577ms for pod "kube-controller-manager-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:26.732903  285842 pod_ready.go:83] waiting for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.132984  285842 pod_ready.go:94] pod "kube-proxy-m4gfc" is "Ready"
	I1026 08:33:27.133011  285842 pod_ready.go:86] duration metric: took 400.078265ms for pod "kube-proxy-m4gfc" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.332350  285842 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.732939  285842 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-866212" is "Ready"
	I1026 08:33:27.732969  285842 pod_ready.go:86] duration metric: took 400.58918ms for pod "kube-scheduler-default-k8s-diff-port-866212" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.732991  285842 pod_ready.go:40] duration metric: took 39.407435398s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:27.790045  285842 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:33:27.814280  285842 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-866212" cluster and "default" namespace by default
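[editor's note: after minikube rewrites the kubeconfig, the new context can be verified directly from the host:

    kubectl config current-context   # expected: default-k8s-diff-port-866212
    kubectl get nodes -o wide
]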
	I1026 08:33:26.852036  278592 node_ready.go:49] node "kindnet-110992" is "Ready"
	I1026 08:33:26.852069  278592 node_ready.go:38] duration metric: took 41.003339979s for node "kindnet-110992" to be "Ready" ...
	I1026 08:33:26.852086  278592 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:33:26.852160  278592 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:33:26.868892  278592 api_server.go:72] duration metric: took 41.533945796s to wait for apiserver process to appear ...
	I1026 08:33:26.868922  278592 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:33:26.868946  278592 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1026 08:33:26.874559  278592 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1026 08:33:26.876180  278592 api_server.go:141] control plane version: v1.34.1
	I1026 08:33:26.876208  278592 api_server.go:131] duration metric: took 7.277307ms to wait for apiserver health ...
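[editor's note: the same healthz probe can be issued by hand; -k skips verification of the apiserver's self-signed serving cert, and ?verbose lists the individual checks. This assumes anonymous access to /healthz is allowed, which is the default:

    curl -k https://192.168.103.2:8443/healthz
    curl -k "https://192.168.103.2:8443/healthz?verbose"
]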
	I1026 08:33:26.876218  278592 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:33:26.879868  278592 system_pods.go:59] 8 kube-system pods found
	I1026 08:33:26.879909  278592 system_pods.go:61] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.879917  278592 system_pods.go:61] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:26.879926  278592 system_pods.go:61] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:26.879937  278592 system_pods.go:61] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:26.879943  278592 system_pods.go:61] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:26.879954  278592 system_pods.go:61] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:26.879959  278592 system_pods.go:61] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:26.879967  278592 system_pods.go:61] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:26.879979  278592 system_pods.go:74] duration metric: took 3.754746ms to wait for pod list to return data ...
	I1026 08:33:26.879992  278592 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:33:26.883013  278592 default_sa.go:45] found service account: "default"
	I1026 08:33:26.883039  278592 default_sa.go:55] duration metric: took 3.035973ms for default service account to be created ...
	I1026 08:33:26.883051  278592 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:33:26.885872  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:26.885903  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.885911  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:26.885919  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:26.885924  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:26.885929  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:26.885938  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:26.885942  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:26.885952  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:26.885984  278592 retry.go:31] will retry after 231.523004ms: missing components: kube-dns
	I1026 08:33:27.124430  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.124474  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.124484  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.124495  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.124502  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.124509  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.124516  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.124521  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.124533  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:27.124553  278592 retry.go:31] will retry after 262.637807ms: missing components: kube-dns
	I1026 08:33:27.392833  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.392870  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.392877  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.392886  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.392892  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.392897  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.392904  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.392908  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.392916  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:27.392932  278592 retry.go:31] will retry after 396.466485ms: missing components: kube-dns
	I1026 08:33:27.794753  278592 system_pods.go:86] 8 kube-system pods found
	I1026 08:33:27.794796  278592 system_pods.go:89] "coredns-66bc5c9577-ttrbv" [35230487-1224-48fb-a9c4-b038c685ec4d] Running
	I1026 08:33:27.794803  278592 system_pods.go:89] "etcd-kindnet-110992" [b25ca5c2-33c4-4011-9f80-bcab5eeb5ed3] Running
	I1026 08:33:27.794809  278592 system_pods.go:89] "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
	I1026 08:33:27.794814  278592 system_pods.go:89] "kube-apiserver-kindnet-110992" [f18c71e6-a4d4-4c40-91da-c8dfa305c1b2] Running
	I1026 08:33:27.794819  278592 system_pods.go:89] "kube-controller-manager-kindnet-110992" [56ea173a-868d-4ef6-a79a-20444cccf927] Running
	I1026 08:33:27.794825  278592 system_pods.go:89] "kube-proxy-kfcp7" [0a2a6415-a06d-45ff-bb75-332e5725c78d] Running
	I1026 08:33:27.794831  278592 system_pods.go:89] "kube-scheduler-kindnet-110992" [7190bcbc-eb05-438a-b011-c23a0f50d594] Running
	I1026 08:33:27.794837  278592 system_pods.go:89] "storage-provisioner" [cded74ea-fdd3-46bd-8f2b-eddb96fb8e01] Running
	I1026 08:33:27.794849  278592 system_pods.go:126] duration metric: took 911.792108ms to wait for k8s-apps to be running ...
	I1026 08:33:27.794863  278592 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:33:27.794912  278592 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:27.810080  278592 system_svc.go:56] duration metric: took 15.211284ms WaitForService to wait for kubelet
	I1026 08:33:27.810104  278592 kubeadm.go:586] duration metric: took 42.475165714s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:33:27.810121  278592 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:33:27.867034  278592 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:33:27.867065  278592 node_conditions.go:123] node cpu capacity is 8
	I1026 08:33:27.867081  278592 node_conditions.go:105] duration metric: took 56.954334ms to run NodePressure ...
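[editor's note: the NodePressure step reads the node's reported capacity; the same figures (cpu 8, ephemeral-storage 304681132Ki) are visible with:

    kubectl get node kindnet-110992 -o jsonpath='{.status.capacity}'
]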
	I1026 08:33:27.867096  278592 start.go:241] waiting for startup goroutines ...
	I1026 08:33:27.867109  278592 start.go:246] waiting for cluster config update ...
	I1026 08:33:27.867123  278592 start.go:255] writing updated cluster config ...
	I1026 08:33:27.867426  278592 ssh_runner.go:195] Run: rm -f paused
	I1026 08:33:27.873059  278592 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:27.877536  278592 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ttrbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.884920  278592 pod_ready.go:94] pod "coredns-66bc5c9577-ttrbv" is "Ready"
	I1026 08:33:27.884953  278592 pod_ready.go:86] duration metric: took 7.39026ms for pod "coredns-66bc5c9577-ttrbv" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.887303  278592 pod_ready.go:83] waiting for pod "etcd-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.891874  278592 pod_ready.go:94] pod "etcd-kindnet-110992" is "Ready"
	I1026 08:33:27.891895  278592 pod_ready.go:86] duration metric: took 4.572409ms for pod "etcd-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.894092  278592 pod_ready.go:83] waiting for pod "kube-apiserver-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.898352  278592 pod_ready.go:94] pod "kube-apiserver-kindnet-110992" is "Ready"
	I1026 08:33:27.898384  278592 pod_ready.go:86] duration metric: took 4.257267ms for pod "kube-apiserver-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:27.900375  278592 pod_ready.go:83] waiting for pod "kube-controller-manager-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.389650  278592 pod_ready.go:94] pod "kube-controller-manager-kindnet-110992" is "Ready"
	I1026 08:33:28.389675  278592 pod_ready.go:86] duration metric: took 489.281702ms for pod "kube-controller-manager-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.478506  278592 pod_ready.go:83] waiting for pod "kube-proxy-kfcp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:28.877973  278592 pod_ready.go:94] pod "kube-proxy-kfcp7" is "Ready"
	I1026 08:33:28.878007  278592 pod_ready.go:86] duration metric: took 399.472679ms for pod "kube-proxy-kfcp7" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.078527  278592 pod_ready.go:83] waiting for pod "kube-scheduler-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.478790  278592 pod_ready.go:94] pod "kube-scheduler-kindnet-110992" is "Ready"
	I1026 08:33:29.478820  278592 pod_ready.go:86] duration metric: took 400.264656ms for pod "kube-scheduler-kindnet-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:29.478834  278592 pod_ready.go:40] duration metric: took 1.60571722s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
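[editor's note: the extra Ready wait can be approximated per label selector with kubectl wait (ignoring the "or be gone" escape hatch the helper adds); a sketch for the kube-dns pods:

    kubectl -n kube-system wait pod -l k8s-app=kube-dns \
      --for=condition=Ready --timeout=4m
]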
	I1026 08:33:29.540118  278592 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:33:29.542480  278592 out.go:179] * Done! kubectl is now configured to use "kindnet-110992" cluster and "default" namespace by default
	I1026 08:33:25.099957  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:25.099988  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:25.099997  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:25.100003  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:25.100008  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:25.100015  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:25.100019  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:25.100024  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:25.100035  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:25.100040  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:25.100062  290986 retry.go:31] will retry after 429.717297ms: missing components: kube-dns
	I1026 08:33:25.535216  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:25.535287  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:25.535301  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:25.535314  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:25.535332  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:25.535343  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:25.535352  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:25.535358  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:25.535367  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:25.535373  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:25.535400  290986 retry.go:31] will retry after 547.962369ms: missing components: kube-dns
	I1026 08:33:26.089280  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:26.089328  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:26.089342  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:26.089353  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.089367  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:26.089374  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:26.089380  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:26.089386  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:26.089392  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:26.089397  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:26.089416  290986 retry.go:31] will retry after 584.273436ms: missing components: kube-dns
	I1026 08:33:26.679010  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:26.679050  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:26.679068  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:26.679077  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:26.679084  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:26.679092  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:26.679110  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:26.679116  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:26.679121  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:26.679126  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:26.679146  290986 retry.go:31] will retry after 910.300944ms: missing components: kube-dns
	I1026 08:33:27.593986  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:27.594022  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:27.594034  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:27.594055  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:27.594076  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:27.594085  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:27.594090  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:27.594095  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:27.594101  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:27.594107  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:27.594130  290986 retry.go:31] will retry after 1.010302158s: missing components: kube-dns
	I1026 08:33:28.609186  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:28.609222  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:28.609234  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:28.609244  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:28.609281  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:28.609289  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:28.609297  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:28.609303  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:28.609311  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:28.609316  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:28.609340  290986 retry.go:31] will retry after 1.223875932s: missing components: kube-dns
	I1026 08:33:29.837467  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:29.837502  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:29.837515  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Pending / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:29.837525  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:29.837531  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:29.837537  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:29.837544  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:29.837550  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:29.837557  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:29.837565  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:29.837583  290986 retry.go:31] will retry after 1.661573564s: missing components: kube-dns
	I1026 08:33:27.459108  297886 out.go:252]   - Booting up control plane ...
	I1026 08:33:27.459280  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1026 08:33:27.460652  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1026 08:33:27.461830  297886 kubeadm.go:318] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1026 08:33:27.481200  297886 kubeadm.go:318] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1026 08:33:27.481380  297886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1026 08:33:27.490068  297886 kubeadm.go:318] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1026 08:33:27.490380  297886 kubeadm.go:318] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1026 08:33:27.490423  297886 kubeadm.go:318] [kubelet-start] Starting the kubelet
	I1026 08:33:27.630525  297886 kubeadm.go:318] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1026 08:33:27.630675  297886 kubeadm.go:318] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1026 08:33:28.634366  297886 kubeadm.go:318] [kubelet-check] The kubelet is healthy after 1.001970821s
	I1026 08:33:28.636662  297886 kubeadm.go:318] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1026 08:33:28.636814  297886 kubeadm.go:318] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1026 08:33:28.636934  297886 kubeadm.go:318] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1026 08:33:28.637038  297886 kubeadm.go:318] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1026 08:33:30.141472  297886 kubeadm.go:318] [control-plane-check] kube-controller-manager is healthy after 1.504859912s
	I1026 08:33:30.964619  297886 kubeadm.go:318] [control-plane-check] kube-scheduler is healthy after 2.328035963s
	I1026 08:33:32.638296  297886 kubeadm.go:318] [control-plane-check] kube-apiserver is healthy after 4.001656106s
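[editor's note: the three control-plane-check URLs are plain HTTPS endpoints on the node; from a minikube ssh session they can be probed with curl (-k because the components serve self-signed certs, and the health paths are allowed without credentials by default):

    curl -k https://127.0.0.1:10257/healthz   # kube-controller-manager
    curl -k https://127.0.0.1:10259/livez     # kube-scheduler
    curl -k https://192.168.76.2:8443/livez   # kube-apiserver
]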
	I1026 08:33:32.649453  297886 kubeadm.go:318] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1026 08:33:32.662328  297886 kubeadm.go:318] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1026 08:33:32.672126  297886 kubeadm.go:318] [upload-certs] Skipping phase. Please see --upload-certs
	I1026 08:33:32.672402  297886 kubeadm.go:318] [mark-control-plane] Marking the node custom-flannel-110992 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1026 08:33:32.681301  297886 kubeadm.go:318] [bootstrap-token] Using token: d0sojn.85e0b0ka2cwunoiy
	I1026 08:33:32.682721  297886 out.go:252]   - Configuring RBAC rules ...
	I1026 08:33:32.682867  297886 kubeadm.go:318] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1026 08:33:32.686185  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1026 08:33:32.691584  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1026 08:33:32.694062  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1026 08:33:32.697506  297886 kubeadm.go:318] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1026 08:33:32.699806  297886 kubeadm.go:318] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1026 08:33:33.043809  297886 kubeadm.go:318] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1026 08:33:33.461228  297886 kubeadm.go:318] [addons] Applied essential addon: CoreDNS
	I1026 08:33:34.044020  297886 kubeadm.go:318] [addons] Applied essential addon: kube-proxy
	I1026 08:33:34.044868  297886 kubeadm.go:318] 
	I1026 08:33:34.044981  297886 kubeadm.go:318] Your Kubernetes control-plane has initialized successfully!
	I1026 08:33:34.045010  297886 kubeadm.go:318] 
	I1026 08:33:34.045114  297886 kubeadm.go:318] To start using your cluster, you need to run the following as a regular user:
	I1026 08:33:34.045123  297886 kubeadm.go:318] 
	I1026 08:33:34.045173  297886 kubeadm.go:318]   mkdir -p $HOME/.kube
	I1026 08:33:34.045297  297886 kubeadm.go:318]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1026 08:33:34.045362  297886 kubeadm.go:318]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1026 08:33:34.045389  297886 kubeadm.go:318] 
	I1026 08:33:34.045482  297886 kubeadm.go:318] Alternatively, if you are the root user, you can run:
	I1026 08:33:34.045501  297886 kubeadm.go:318] 
	I1026 08:33:34.045570  297886 kubeadm.go:318]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1026 08:33:34.045578  297886 kubeadm.go:318] 
	I1026 08:33:34.045639  297886 kubeadm.go:318] You should now deploy a pod network to the cluster.
	I1026 08:33:34.045734  297886 kubeadm.go:318] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1026 08:33:34.045821  297886 kubeadm.go:318]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1026 08:33:34.045833  297886 kubeadm.go:318] 
	I1026 08:33:34.045931  297886 kubeadm.go:318] You can now join any number of control-plane nodes by copying certificate authorities
	I1026 08:33:34.046023  297886 kubeadm.go:318] and service account keys on each node and then running the following as root:
	I1026 08:33:34.046032  297886 kubeadm.go:318] 
	I1026 08:33:34.046147  297886 kubeadm.go:318]   kubeadm join control-plane.minikube.internal:8443 --token d0sojn.85e0b0ka2cwunoiy \
	I1026 08:33:34.046341  297886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d \
	I1026 08:33:34.046381  297886 kubeadm.go:318] 	--control-plane 
	I1026 08:33:34.046392  297886 kubeadm.go:318] 
	I1026 08:33:34.046496  297886 kubeadm.go:318] Then you can join any number of worker nodes by running the following on each as root:
	I1026 08:33:34.046504  297886 kubeadm.go:318] 
	I1026 08:33:34.046596  297886 kubeadm.go:318] kubeadm join control-plane.minikube.internal:8443 --token d0sojn.85e0b0ka2cwunoiy \
	I1026 08:33:34.046736  297886 kubeadm.go:318] 	--discovery-token-ca-cert-hash sha256:3c3e594ebc6a9434be577b342cd1d18d3808516a671cdc3688503f0e3d6a248d 
	I1026 08:33:34.049671  297886 kubeadm.go:318] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1042-gcp\n", err: exit status 1
	I1026 08:33:34.049818  297886 kubeadm.go:318] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
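[editor's note: the --discovery-token-ca-cert-hash printed above is the SHA-256 of the cluster CA's public key, recomputable with the pipeline documented for kubeadm (shown with stock kubeadm's /etc/kubernetes/pki/ca.crt; minikube keeps its CA under /var/lib/minikube/certs). The output is the bare hex digest, without the sha256: prefix:

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
]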
	I1026 08:33:34.049853  297886 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1026 08:33:34.052716  297886 out.go:179] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I1026 08:33:31.503752  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:31.503786  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:31.503798  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:31.503821  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:31.503833  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:31.503842  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:31.503847  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:31.503853  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:31.503859  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:31.503868  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:31.503887  290986 retry.go:31] will retry after 1.994361195s: missing components: kube-dns
	I1026 08:33:33.503528  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:33.503557  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:33.503565  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:33.503573  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:33.503577  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:33.503581  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:33.503585  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:33.503588  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:33.503592  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:33.503596  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:33.503609  290986 retry.go:31] will retry after 2.083828012s: missing components: kube-dns
	I1026 08:33:34.054031  297886 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1026 08:33:34.054077  297886 ssh_runner.go:195] Run: stat -c "%s %y" /var/tmp/minikube/cni.yaml
	I1026 08:33:34.058176  297886 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%s %y" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I1026 08:33:34.058201  297886 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I1026 08:33:34.077102  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
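[editor's note: this is the custom-CNI path: minikube scp's the user-supplied manifest to /var/tmp/minikube/cni.yaml and applies it with the pinned kubectl. The test reaches it through the --cni flag, which accepts a local manifest path as well as the built-in names; a sketch of the equivalent invocation:

    minikube start -p custom-flannel-110992 --driver=docker \
      --container-runtime=crio --cni=testdata/kube-flannel.yaml
]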
	I1026 08:33:34.392471  297886 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1026 08:33:34.392576  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:34.392607  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes custom-flannel-110992 minikube.k8s.io/updated_at=2025_10_26T08_33_34_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4 minikube.k8s.io/name=custom-flannel-110992 minikube.k8s.io/primary=true
	I1026 08:33:34.402356  297886 ops.go:34] apiserver oom_adj: -16
	I1026 08:33:34.466388  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:34.967214  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:35.467464  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:35.967119  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:36.467466  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:36.967470  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:37.466533  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:37.967429  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:38.467177  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:38.967276  297886 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1026 08:33:39.035204  297886 kubeadm.go:1113] duration metric: took 4.642698439s to wait for elevateKubeSystemPrivileges
	I1026 08:33:39.035242  297886 kubeadm.go:402] duration metric: took 18.043610807s to StartCluster
	I1026 08:33:39.035290  297886 settings.go:142] acquiring lock: {Name:mk7953e8c7e359db9e13b550a80213a7a35d9abe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:39.035375  297886 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:33:39.037310  297886 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21772-9429/kubeconfig: {Name:mk2f16d4a02402bb1ce7ffb9ee15a12862bc8473 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1026 08:33:39.037544  297886 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1026 08:33:39.037559  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1026 08:33:39.037594  297886 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1026 08:33:39.037691  297886 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-110992"
	I1026 08:33:39.037699  297886 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-110992"
	I1026 08:33:39.037714  297886 addons.go:238] Setting addon storage-provisioner=true in "custom-flannel-110992"
	I1026 08:33:39.037724  297886 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-110992"
	I1026 08:33:39.037748  297886 host.go:66] Checking if "custom-flannel-110992" exists ...
	I1026 08:33:39.037788  297886 config.go:182] Loaded profile config "custom-flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:33:39.038158  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.038380  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.039224  297886 out.go:179] * Verifying Kubernetes components...
	I1026 08:33:39.040672  297886 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1026 08:33:39.063422  297886 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1026 08:33:39.065318  297886 addons.go:238] Setting addon default-storageclass=true in "custom-flannel-110992"
	I1026 08:33:39.065365  297886 host.go:66] Checking if "custom-flannel-110992" exists ...
	I1026 08:33:39.065799  297886 cli_runner.go:164] Run: docker container inspect custom-flannel-110992 --format={{.State.Status}}
	I1026 08:33:39.066799  297886 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:39.066828  297886 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1026 08:33:39.066881  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:39.094849  297886 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:39.094877  297886 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1026 08:33:39.094947  297886 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" custom-flannel-110992
	I1026 08:33:39.109131  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:39.131325  297886 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33121 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/custom-flannel-110992/id_rsa Username:docker}
	I1026 08:33:39.198800  297886 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1026 08:33:39.236296  297886 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1026 08:33:39.266297  297886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1026 08:33:39.275954  297886 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1026 08:33:39.372872  297886 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
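[editor's note: the injected host record lands in the coredns ConfigMap's Corefile; to confirm it:

    kubectl -n kube-system get configmap coredns \
      -o jsonpath='{.data.Corefile}' | grep -A2 'hosts {'
]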
	I1026 08:33:39.374220  297886 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-110992" to be "Ready" ...
	I1026 08:33:39.579998  297886 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1026 08:33:35.592120  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:35.592155  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:35.592167  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:35.592178  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:35.592184  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:35.592190  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:35.592195  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:35.592203  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:35.592206  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:35.592209  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:35.592223  290986 retry.go:31] will retry after 2.491065819s: missing components: kube-dns
	I1026 08:33:38.090037  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:38.090080  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I1026 08:33:38.090093  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I1026 08:33:38.090105  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:38.090112  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:38.090119  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:38.090126  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:38.090140  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:38.090145  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:38.090151  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:38.090206  290986 retry.go:31] will retry after 4.485660087s: missing components: kube-dns
	I1026 08:33:39.581430  297886 addons.go:514] duration metric: took 543.833117ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1026 08:33:39.877320  297886 kapi.go:214] "coredns" deployment in "kube-system" namespace and "custom-flannel-110992" context rescaled to 1 replicas
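[editor's note: minikube trims CoreDNS to a single replica on single-node clusters; the equivalent kubectl operation is:

    kubectl -n kube-system scale deployment coredns --replicas=1
]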
	I1026 08:33:42.581104  290986 system_pods.go:86] 9 kube-system pods found
	I1026 08:33:42.581130  290986 system_pods.go:89] "calico-kube-controllers-59556d9b4c-bz48f" [ffb985a1-f963-4035-bd4b-c8d8366655dc] Running
	I1026 08:33:42.581135  290986 system_pods.go:89] "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running
	I1026 08:33:42.581139  290986 system_pods.go:89] "coredns-66bc5c9577-kljmz" [d726bdb5-98ac-4b87-a169-86954fede114] Running
	I1026 08:33:42.581142  290986 system_pods.go:89] "etcd-calico-110992" [dd2f053a-bb8a-476d-a3d5-f526b7d56e22] Running
	I1026 08:33:42.581145  290986 system_pods.go:89] "kube-apiserver-calico-110992" [cd72eb41-b43b-4a32-949a-060c84592720] Running
	I1026 08:33:42.581149  290986 system_pods.go:89] "kube-controller-manager-calico-110992" [a194984b-9b81-4360-924e-83d0913cc890] Running
	I1026 08:33:42.581152  290986 system_pods.go:89] "kube-proxy-rcpjp" [f3e4fe19-69c8-475d-b8a1-1da03254f946] Running
	I1026 08:33:42.581156  290986 system_pods.go:89] "kube-scheduler-calico-110992" [75f24dc4-b530-4d93-8bc9-a804253bab96] Running
	I1026 08:33:42.581159  290986 system_pods.go:89] "storage-provisioner" [cc2ef5ef-090c-481d-a858-15537b8605d9] Running
	I1026 08:33:42.581167  290986 system_pods.go:126] duration metric: took 18.042540232s to wait for k8s-apps to be running ...
	I1026 08:33:42.581173  290986 system_svc.go:44] waiting for kubelet service to be running ....
	I1026 08:33:42.581214  290986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:33:42.595551  290986 system_svc.go:56] duration metric: took 14.367606ms WaitForService to wait for kubelet
	I1026 08:33:42.595581  290986 kubeadm.go:586] duration metric: took 22.408055576s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1026 08:33:42.595601  290986 node_conditions.go:102] verifying NodePressure condition ...
	I1026 08:33:42.598111  290986 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1026 08:33:42.598130  290986 node_conditions.go:123] node cpu capacity is 8
	I1026 08:33:42.598144  290986 node_conditions.go:105] duration metric: took 2.536387ms to run NodePressure ...
	I1026 08:33:42.598154  290986 start.go:241] waiting for startup goroutines ...
	I1026 08:33:42.598165  290986 start.go:246] waiting for cluster config update ...
	I1026 08:33:42.598174  290986 start.go:255] writing updated cluster config ...
	I1026 08:33:42.598443  290986 ssh_runner.go:195] Run: rm -f paused
	I1026 08:33:42.602299  290986 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:42.605668  290986 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-kljmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.610081  290986 pod_ready.go:94] pod "coredns-66bc5c9577-kljmz" is "Ready"
	I1026 08:33:42.610102  290986 pod_ready.go:86] duration metric: took 4.411666ms for pod "coredns-66bc5c9577-kljmz" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.612101  290986 pod_ready.go:83] waiting for pod "etcd-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.616028  290986 pod_ready.go:94] pod "etcd-calico-110992" is "Ready"
	I1026 08:33:42.616049  290986 pod_ready.go:86] duration metric: took 3.924377ms for pod "etcd-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.618073  290986 pod_ready.go:83] waiting for pod "kube-apiserver-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.621815  290986 pod_ready.go:94] pod "kube-apiserver-calico-110992" is "Ready"
	I1026 08:33:42.621833  290986 pod_ready.go:86] duration metric: took 3.744186ms for pod "kube-apiserver-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:42.623775  290986 pod_ready.go:83] waiting for pod "kube-controller-manager-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:43.006866  290986 pod_ready.go:94] pod "kube-controller-manager-calico-110992" is "Ready"
	I1026 08:33:43.006898  290986 pod_ready.go:86] duration metric: took 383.104362ms for pod "kube-controller-manager-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:43.207764  290986 pod_ready.go:83] waiting for pod "kube-proxy-rcpjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:43.607444  290986 pod_ready.go:94] pod "kube-proxy-rcpjp" is "Ready"
	I1026 08:33:43.607471  290986 pod_ready.go:86] duration metric: took 399.685646ms for pod "kube-proxy-rcpjp" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:43.806909  290986 pod_ready.go:83] waiting for pod "kube-scheduler-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:44.206604  290986 pod_ready.go:94] pod "kube-scheduler-calico-110992" is "Ready"
	I1026 08:33:44.206634  290986 pod_ready.go:86] duration metric: took 399.700054ms for pod "kube-scheduler-calico-110992" in "kube-system" namespace to be "Ready" or be gone ...
	I1026 08:33:44.206649  290986 pod_ready.go:40] duration metric: took 1.604322315s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1026 08:33:44.256550  290986 start.go:624] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1026 08:33:44.258087  290986 out.go:179] * Done! kubectl is now configured to use "calico-110992" cluster and "default" namespace by default
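
The pod_ready phase above loops over one label selector per control-plane component and blocks until each matching pod is "Ready" or gone. Roughly the same check can be made from the host with kubectl; a minimal sketch, assuming the "calico-110992" context this run just configured (not minikube's actual code path):

    # approximate the extra-wait phase, one selector per component
    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl --context calico-110992 -n kube-system \
        wait --for=condition=Ready pod -l "$sel" --timeout=240s
    done
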
	W1026 08:33:41.377855  297886 node_ready.go:57] node "custom-flannel-110992" has "Ready":"False" status (will retry)
	I1026 08:33:41.876853  297886 node_ready.go:49] node "custom-flannel-110992" is "Ready"
	I1026 08:33:41.876885  297886 node_ready.go:38] duration metric: took 2.502585063s for node "custom-flannel-110992" to be "Ready" ...
	I1026 08:33:41.876898  297886 api_server.go:52] waiting for apiserver process to appear ...
	I1026 08:33:41.876942  297886 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:33:41.888719  297886 api_server.go:72] duration metric: took 2.851143336s to wait for apiserver process to appear ...
	I1026 08:33:41.888747  297886 api_server.go:88] waiting for apiserver healthz status ...
	I1026 08:33:41.888763  297886 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1026 08:33:41.893903  297886 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1026 08:33:41.894807  297886 api_server.go:141] control plane version: v1.34.1
	I1026 08:33:41.894828  297886 api_server.go:131] duration metric: took 6.075447ms to wait for apiserver health ...
	I1026 08:33:41.894836  297886 system_pods.go:43] waiting for kube-system pods to appear ...
	I1026 08:33:41.897993  297886 system_pods.go:59] 7 kube-system pods found
	I1026 08:33:41.898030  297886 system_pods.go:61] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:41.898041  297886 system_pods.go:61] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:41.898055  297886 system_pods.go:61] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:41.898068  297886 system_pods.go:61] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:33:41.898075  297886 system_pods.go:61] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:41.898101  297886 system_pods.go:61] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:41.898111  297886 system_pods.go:61] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:41.898119  297886 system_pods.go:74] duration metric: took 3.27811ms to wait for pod list to return data ...
	I1026 08:33:41.898126  297886 default_sa.go:34] waiting for default service account to be created ...
	I1026 08:33:41.900164  297886 default_sa.go:45] found service account: "default"
	I1026 08:33:41.900179  297886 default_sa.go:55] duration metric: took 2.045163ms for default service account to be created ...
	I1026 08:33:41.900186  297886 system_pods.go:116] waiting for k8s-apps to be running ...
	I1026 08:33:41.902661  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:41.902682  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:41.902689  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:41.902714  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:41.902723  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:33:41.902727  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:41.902734  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:41.902754  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:41.902787  297886 retry.go:31] will retry after 210.966008ms: missing components: kube-dns
	I1026 08:33:42.117960  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:42.117987  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:42.117999  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:42.118009  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:42.118038  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:33:42.118045  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:42.118053  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:42.118068  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1026 08:33:42.118085  297886 retry.go:31] will retry after 306.075118ms: missing components: kube-dns
	I1026 08:33:42.427826  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:42.427864  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:42.427875  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:42.427887  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:42.427905  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:33:42.427911  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:42.427919  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:42.427924  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Running
	I1026 08:33:42.427943  297886 retry.go:31] will retry after 466.978013ms: missing components: kube-dns
	I1026 08:33:42.899525  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:42.899555  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:42.899562  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:42.899582  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:42.899588  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1026 08:33:42.899594  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:42.899599  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:42.899602  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Running
	I1026 08:33:42.899625  297886 retry.go:31] will retry after 533.371191ms: missing components: kube-dns
	I1026 08:33:43.437209  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:43.437295  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:43.437308  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:43.437325  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:43.437335  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running
	I1026 08:33:43.437345  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:43.437353  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:43.437362  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Running
	I1026 08:33:43.437380  297886 retry.go:31] will retry after 514.217174ms: missing components: kube-dns
	I1026 08:33:43.956184  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:43.956221  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:43.956231  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:43.956262  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:43.956270  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running
	I1026 08:33:43.956282  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:43.956288  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1026 08:33:43.956293  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Running
	I1026 08:33:43.956313  297886 retry.go:31] will retry after 776.118762ms: missing components: kube-dns
	I1026 08:33:44.737544  297886 system_pods.go:86] 7 kube-system pods found
	I1026 08:33:44.737572  297886 system_pods.go:89] "coredns-66bc5c9577-kkb9q" [6805946e-e64c-415b-a6f1-19805e922ac3] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1026 08:33:44.737579  297886 system_pods.go:89] "etcd-custom-flannel-110992" [bf9e0d1f-c4e0-4340-9067-f38717ad1042] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1026 08:33:44.737605  297886 system_pods.go:89] "kube-apiserver-custom-flannel-110992" [81be8308-6eca-4a7d-9f5a-4fde9f404a09] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1026 08:33:44.737614  297886 system_pods.go:89] "kube-controller-manager-custom-flannel-110992" [13b42f4d-1a3c-4db1-8545-ba3e87ec36ba] Running
	I1026 08:33:44.737620  297886 system_pods.go:89] "kube-proxy-gwnzk" [18472f90-803d-4f35-bb24-cf57ce13fdec] Running
	I1026 08:33:44.737629  297886 system_pods.go:89] "kube-scheduler-custom-flannel-110992" [21522f96-d4a4-412f-85bb-455424aec270] Running
	I1026 08:33:44.737633  297886 system_pods.go:89] "storage-provisioner" [432c5dd5-7ff6-49f8-a143-3aa8178f7def] Running
	I1026 08:33:44.737655  297886 retry.go:31] will retry after 753.539709ms: missing components: kube-dns
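
The repeated "missing components: kube-dns" retries are the expected pattern while a CNI is converging: coredns stays Pending until the node is Ready and pod networking exists, and even after custom-flannel-110992 turns Ready at 08:33:41 the pod still needs a few seconds to start. When such a wait does not resolve, the blocker is usually visible from the pod's events; a sketch, with the context name taken from this run:

    kubectl --context custom-flannel-110992 -n kube-system get pods -l k8s-app=kube-dns -o wide
    # the Events section normally names the blocker (unschedulable, image pull, sandbox failure, ...)
    kubectl --context custom-flannel-110992 -n kube-system describe pods -l k8s-app=kube-dns
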
	
	
	==> CRI-O <==
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.168706517Z" level=info msg="Started container" PID=1742 containerID=0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper id=53337e31-d48f-4558-8e35-aea254d4e217 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8aac7ddd73687d9d220453227c3218a3b00f6b090b87c5750a57a24eb2c7e75
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.248146176Z" level=info msg="Removing container: 2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605" id=904a2bd2-d1a9-4d43-afec-a9139b3ebc3a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:08 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:08.258141333Z" level=info msg="Removed container 2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=904a2bd2-d1a9-4d43-afec-a9139b3ebc3a name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.280387452Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d41d784b-ffac-49c7-84bc-10db4451ca41 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.281414676Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=872bd60f-c5df-4597-894b-f5e3d32800e9 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.282527614Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=65e42fc1-9165-4b20-ace7-552d38e7babf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.282661626Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287203582Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.28740396Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/844a1d364e62b964561f9b90a385f91bfa2b5f7ad3658eb2ed6cbdca5369801c/merged/etc/passwd: no such file or directory"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287439753Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/844a1d364e62b964561f9b90a385f91bfa2b5f7ad3658eb2ed6cbdca5369801c/merged/etc/group: no such file or directory"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.287689057Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.319521908Z" level=info msg="Created container 39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297: kube-system/storage-provisioner/storage-provisioner" id=65e42fc1-9165-4b20-ace7-552d38e7babf name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.320221894Z" level=info msg="Starting container: 39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297" id=2fbde231-157b-4d97-aa09-598cb6487a7e name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:33:18 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:18.322345082Z" level=info msg="Started container" PID=1756 containerID=39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297 description=kube-system/storage-provisioner/storage-provisioner id=2fbde231-157b-4d97-aa09-598cb6487a7e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e0500a55d16f3240e1530d778381b1ce7b563d4e2b3577e4026b4140ae15509
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.12453094Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=d66c0621-7e22-4471-87c6-5aec11bbfcfb name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.126918145Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ce8bc235-28aa-4cdf-8105-cf3c65a2d865 name=/runtime.v1.ImageService/ImageStatus
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.128116424Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=3f77d080-4036-4838-8e96-16f3322718a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.128644206Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.137617117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.138351704Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.185826204Z" level=info msg="Created container 26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=3f77d080-4036-4838-8e96-16f3322718a1 name=/runtime.v1.RuntimeService/CreateContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.186683291Z" level=info msg="Starting container: 26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8" id=6ddbd748-da5b-42a9-af5c-da3d385a3073 name=/runtime.v1.RuntimeService/StartContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.18904219Z" level=info msg="Started container" PID=1792 containerID=26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper id=6ddbd748-da5b-42a9-af5c-da3d385a3073 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8aac7ddd73687d9d220453227c3218a3b00f6b090b87c5750a57a24eb2c7e75
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.321101515Z" level=info msg="Removing container: 0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438" id=0450c7a3-605d-44b5-b949-3f1bbb311940 name=/runtime.v1.RuntimeService/RemoveContainer
	Oct 26 08:33:31 default-k8s-diff-port-866212 crio[559]: time="2025-10-26T08:33:31.331441872Z" level=info msg="Removed container 0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh/dashboard-metrics-scraper" id=0450c7a3-605d-44b5-b949-3f1bbb311940 name=/runtime.v1.RuntimeService/RemoveContainer
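
The create/start/remove cycle above is a container restart loop: each dashboard-metrics-scraper attempt gets a new container ID, exits, and the kubelet garbage-collects the previous attempt. To follow such a loop on the node itself, a sketch using crictl from inside `minikube ssh` (<container-id> is a placeholder for an ID taken from the first command):

    # list all scraper containers, including exited attempts
    sudo crictl ps -a --name dashboard-metrics-scraper
    sudo crictl logs <container-id>
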
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	26835d1b859d2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           15 seconds ago       Exited              dashboard-metrics-scraper   3                   f8aac7ddd7368       dashboard-metrics-scraper-6ffb444bf9-qshwh             kubernetes-dashboard
	39400809d8a6c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   3e0500a55d16f       storage-provisioner                                    kube-system
	13ab62cadeb68       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   44cbfdd9ae912       kubernetes-dashboard-855c9754f9-wb2rv                  kubernetes-dashboard
	00f90ace4d071       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           58 seconds ago       Running             coredns                     0                   cea90ad51cacd       coredns-66bc5c9577-h4dk5                               kube-system
	d8ad19ac7d6a1       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   c32cc8ba5eb58       busybox                                                default
	5e9a95956c5c1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   3e0500a55d16f       storage-provisioner                                    kube-system
	7e8addd91064c       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           58 seconds ago       Running             kube-proxy                  0                   bb404d54ed538       kube-proxy-m4gfc                                       kube-system
	0c1cd2bcf70ca       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   11f4e915b425c       kindnet-vr7fg                                          kube-system
	bac6e251286c0       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   a09d1a459eb18       kube-apiserver-default-k8s-diff-port-866212            kube-system
	fea0de012ed14       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   ea56d5cd626c2       kube-controller-manager-default-k8s-diff-port-866212   kube-system
	2c7535c22bfef       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   1956428ee36f9       kube-scheduler-default-k8s-diff-port-866212            kube-system
	f179309133864       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   8898076bacecd       etcd-default-k8s-diff-port-866212                      kube-system
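
In this table ATTEMPT is the per-sandbox restart count: dashboard-metrics-scraper is on attempt 3 and currently Exited (the restart loop from the CRI-O log), while storage-provisioner was restarted once (the attempt-0 container is Exited and attempt 1 is Running in the same 3e0500a55d16f sandbox). The prior attempt's output is still retrievable, e.g.:

    # logs of the container instance that ran before the current one
    kubectl -n kube-system logs storage-provisioner --previous
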
	
	
	==> coredns [00f90ace4d0713082578d2953d41522061d3d60ac732cf7c7fec764994fed345] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54114 - 16400 "HINFO IN 5970486570999536228.5468883746339438641. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.04690717s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
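
The "dial tcp 10.96.0.1:443: i/o timeout" errors mean coredns could not reach the in-cluster kubernetes Service while node networking was still settling, which is why it started "with unsynced Kubernetes API" and the ready plugin logged "Still waiting". The same reachability can be probed from a throwaway pod; a sketch, assuming a busybox image whose wget was built with TLS support:

    kubectl run apiprobe --rm -it --restart=Never --image=busybox:1.36 -- \
      wget -qO- -T 5 --no-check-certificate https://10.96.0.1:443/healthz
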
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-866212
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-866212
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=7bff0055abe294a06ae9b3b2dd6f86bacf87f0d4
	                    minikube.k8s.io/name=default-k8s-diff-port-866212
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_10_26T08_31_49_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sun, 26 Oct 2025 08:31:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-866212
	  AcquireTime:     <unset>
	  RenewTime:       Sun, 26 Oct 2025 08:33:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sun, 26 Oct 2025 08:33:17 +0000   Sun, 26 Oct 2025 08:32:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    default-k8s-diff-port-866212
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863360Ki
	  pods:               110
	System Info:
	  Machine ID:                 98aac72b9abe9f06f1b9b38568f5cc96
	  System UUID:                35b0b8af-89ca-40c6-acd5-1ad4f6cfade6
	  Boot ID:                    4a921cc4-d54e-41d6-a6d6-fc946eb5d83d
	  Kernel Version:             6.8.0-1042-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 coredns-66bc5c9577-h4dk5                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     112s
	  kube-system                 etcd-default-k8s-diff-port-866212                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         118s
	  kube-system                 kindnet-vr7fg                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-default-k8s-diff-port-866212             250m (3%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-866212    200m (2%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-proxy-m4gfc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-default-k8s-diff-port-866212             100m (1%)     0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-qshwh              0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wb2rv                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 111s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasSufficientMemory  118s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientPID
	  Normal  Starting                 118s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           113s               node-controller  Node default-k8s-diff-port-866212 event: Registered Node default-k8s-diff-port-866212 in Controller
	  Normal  NodeReady                101s               kubelet          Node default-k8s-diff-port-866212 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node default-k8s-diff-port-866212 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           56s                node-controller  Node default-k8s-diff-port-866212 event: Registered Node default-k8s-diff-port-866212 in Controller
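
Note the two "Starting kubelet." entries (118s and 62s ago) and the second RegisteredNode event: the kubelet was restarted mid-run, consistent with this profile being stopped and started again by the test flow. Ordering events by time across namespaces makes such restarts easy to spot:

    kubectl get events -A --sort-by=.lastTimestamp
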
	
	
	==> dmesg <==
	[  +0.093611] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.026606] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.414486] kauditd_printk_skb: 47 callbacks suppressed
	[Oct26 07:50] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.059230] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.024914] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.022937] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023902] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +1.023932] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +2.047830] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000009] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +4.031719] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[  +8.063469] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[Oct26 07:51] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
	[ +32.253687] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: e6 42 c4 1a fb 74 ca 57 8b ae b4 3c 08 00
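
The "martian source" lines are the kernel flagging packets addressed to pod IP 10.244.0.20 that claim the loopback source 127.0.0.1 on eth0, an address impossible on that interface; with nested container networking and hairpin NAT this is common, noisy, and usually harmless. Whether such packets are logged, and how strictly reverse paths are validated, is sysctl-controlled:

    # 1 = log martian packets (what produced the lines above)
    sysctl net.ipv4.conf.all.log_martians
    # reverse-path filter: 0 = off, 1 = strict, 2 = loose
    sysctl net.ipv4.conf.all.rp_filter
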
	
	
	==> etcd [f1793091338642d5b5aa05b444ce27113423e5b31e8531e922ed908abb8f7ed4] <==
	{"level":"warn","ts":"2025-10-26T08:32:46.056977Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38440","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-10-26T08:32:46.150342Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38470","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-10-26T08:32:53.834986Z","caller":"traceutil/trace.go:172","msg":"trace[1407613908] transaction","detail":"{read_only:false; response_revision:510; number_of_response:1; }","duration":"100.581594ms","start":"2025-10-26T08:32:53.734383Z","end":"2025-10-26T08:32:53.834965Z","steps":["trace[1407613908] 'process raft request'  (duration: 95.823431ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:54.395533Z","caller":"traceutil/trace.go:172","msg":"trace[384222494] transaction","detail":"{read_only:false; response_revision:521; number_of_response:1; }","duration":"133.747202ms","start":"2025-10-26T08:32:54.261757Z","end":"2025-10-26T08:32:54.395505Z","steps":["trace[384222494] 'process raft request'  (duration: 119.559134ms)","trace[384222494] 'compare'  (duration: 13.951784ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.673452Z","caller":"traceutil/trace.go:172","msg":"trace[2145136223] transaction","detail":"{read_only:false; response_revision:528; number_of_response:1; }","duration":"148.083117ms","start":"2025-10-26T08:32:54.525344Z","end":"2025-10-26T08:32:54.673427Z","steps":["trace[2145136223] 'process raft request'  (duration: 124.891463ms)","trace[2145136223] 'compare'  (duration: 23.080146ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.807648Z","caller":"traceutil/trace.go:172","msg":"trace[337547233] transaction","detail":"{read_only:false; response_revision:529; number_of_response:1; }","duration":"129.274979ms","start":"2025-10-26T08:32:54.678353Z","end":"2025-10-26T08:32:54.807628Z","steps":["trace[337547233] 'process raft request'  (duration: 122.996075ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:54.953909Z","caller":"traceutil/trace.go:172","msg":"trace[1335758568] linearizableReadLoop","detail":"{readStateIndex:555; appliedIndex:555; }","duration":"123.357278ms","start":"2025-10-26T08:32:54.830529Z","end":"2025-10-26T08:32:54.953887Z","steps":["trace[1335758568] 'read index received'  (duration: 123.347261ms)","trace[1335758568] 'applied index is now lower than readState.Index'  (duration: 8.941µs)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:54.976562Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"146.008597ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-h4dk5\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-26T08:32:54.976611Z","caller":"traceutil/trace.go:172","msg":"trace[1398264768] transaction","detail":"{read_only:false; response_revision:530; number_of_response:1; }","duration":"163.989049ms","start":"2025-10-26T08:32:54.812601Z","end":"2025-10-26T08:32:54.976590Z","steps":["trace[1398264768] 'process raft request'  (duration: 141.333406ms)","trace[1398264768] 'compare'  (duration: 22.521232ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:54.976652Z","caller":"traceutil/trace.go:172","msg":"trace[1615356878] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-h4dk5; range_end:; response_count:1; response_revision:529; }","duration":"146.116947ms","start":"2025-10-26T08:32:54.830516Z","end":"2025-10-26T08:32:54.976633Z","steps":["trace[1615356878] 'agreement among raft nodes before linearized reading'  (duration: 123.450054ms)","trace[1615356878] 'range keys from in-memory index tree'  (duration: 22.448588ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:32:55.116313Z","caller":"traceutil/trace.go:172","msg":"trace[1501113392] transaction","detail":"{read_only:false; response_revision:531; number_of_response:1; }","duration":"134.586941ms","start":"2025-10-26T08:32:54.981701Z","end":"2025-10-26T08:32:55.116288Z","steps":["trace[1501113392] 'process raft request'  (duration: 126.304001ms)"],"step_count":1}
	{"level":"warn","ts":"2025-10-26T08:32:55.373244Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"164.89849ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" limit:1 ","response":"range_response_count:1 size:4622"}
	{"level":"info","ts":"2025-10-26T08:32:55.373342Z","caller":"traceutil/trace.go:172","msg":"trace[2117944040] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh; range_end:; response_count:1; response_revision:531; }","duration":"165.001088ms","start":"2025-10-26T08:32:55.208319Z","end":"2025-10-26T08:32:55.373320Z","steps":["trace[2117944040] 'agreement among raft nodes before linearized reading'  (duration: 31.643016ms)","trace[2117944040] 'range keys from in-memory index tree'  (duration: 133.153088ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.373839Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.30513ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537279 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" mod_revision:529 > success:<request_put:<key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" value_size:690 lease:6571765741983537150 >> failure:<request_range:<key:\"/registry/events/default/default-k8s-diff-port-866212.1871fd6ab9cbbdf5\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:55.374032Z","caller":"traceutil/trace.go:172","msg":"trace[1033819592] transaction","detail":"{read_only:false; response_revision:532; number_of_response:1; }","duration":"252.903905ms","start":"2025-10-26T08:32:55.121110Z","end":"2025-10-26T08:32:55.374014Z","steps":["trace[1033819592] 'process raft request'  (duration: 118.867623ms)","trace[1033819592] 'compare'  (duration: 133.184913ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.704337Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.874059ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537285 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" mod_revision:521 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" value_size:4630 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:55.704586Z","caller":"traceutil/trace.go:172","msg":"trace[209732539] transaction","detail":"{read_only:false; response_revision:535; number_of_response:1; }","duration":"227.459875ms","start":"2025-10-26T08:32:55.477110Z","end":"2025-10-26T08:32:55.704570Z","steps":["trace[209732539] 'process raft request'  (duration: 227.390868ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:32:55.704612Z","caller":"traceutil/trace.go:172","msg":"trace[956732398] transaction","detail":"{read_only:false; response_revision:534; number_of_response:1; }","duration":"322.978002ms","start":"2025-10-26T08:32:55.381614Z","end":"2025-10-26T08:32:55.704592Z","steps":["trace[956732398] 'process raft request'  (duration: 196.759946ms)","trace[956732398] 'compare'  (duration: 125.764303ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:55.704736Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-10-26T08:32:55.381594Z","time spent":"323.071424ms","remote":"127.0.0.1:37710","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4716,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" mod_revision:521 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" value_size:4630 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh\" > >"}
	{"level":"warn","ts":"2025-10-26T08:32:56.042413Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"211.813292ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-h4dk5\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-10-26T08:32:56.042483Z","caller":"traceutil/trace.go:172","msg":"trace[1984375349] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-h4dk5; range_end:; response_count:1; response_revision:538; }","duration":"211.893407ms","start":"2025-10-26T08:32:55.830574Z","end":"2025-10-26T08:32:56.042467Z","steps":["trace[1984375349] 'agreement among raft nodes before linearized reading'  (duration: 79.739368ms)","trace[1984375349] 'range keys from in-memory index tree'  (duration: 131.940805ms)"],"step_count":2}
	{"level":"warn","ts":"2025-10-26T08:32:56.042511Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.059214ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571765741983537290 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/etcd-default-k8s-diff-port-866212.1871fd6ada38ed9e\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/etcd-default-k8s-diff-port-866212.1871fd6ada38ed9e\" value_size:680 lease:6571765741983537150 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-10-26T08:32:56.042577Z","caller":"traceutil/trace.go:172","msg":"trace[15110038] transaction","detail":"{read_only:false; response_revision:539; number_of_response:1; }","duration":"260.00415ms","start":"2025-10-26T08:32:55.782561Z","end":"2025-10-26T08:32:56.042565Z","steps":["trace[15110038] 'process raft request'  (duration: 127.804942ms)","trace[15110038] 'compare'  (duration: 131.811814ms)"],"step_count":2}
	{"level":"info","ts":"2025-10-26T08:33:15.555005Z","caller":"traceutil/trace.go:172","msg":"trace[725905225] transaction","detail":"{read_only:false; response_revision:605; number_of_response:1; }","duration":"106.565383ms","start":"2025-10-26T08:33:15.448419Z","end":"2025-10-26T08:33:15.554984Z","steps":["trace[725905225] 'process raft request'  (duration: 106.410711ms)"],"step_count":1}
	{"level":"info","ts":"2025-10-26T08:33:15.681002Z","caller":"traceutil/trace.go:172","msg":"trace[166051861] transaction","detail":"{read_only:false; response_revision:606; number_of_response:1; }","duration":"229.324722ms","start":"2025-10-26T08:33:15.451654Z","end":"2025-10-26T08:33:15.680979Z","steps":["trace[166051861] 'process raft request'  (duration: 146.532613ms)","trace[166051861] 'compare'  (duration: 82.477977ms)"],"step_count":2}
	
	
	==> kernel <==
	 08:33:46 up  1:16,  0 user,  load average: 6.34, 4.67, 2.75
	Linux default-k8s-diff-port-866212 6.8.0-1042-gcp #45~22.04.1-Ubuntu SMP Tue Oct  7 19:06:40 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0c1cd2bcf70ca230d2e4cb79ce891591e75eaf36dc70ff2f6a1c60c061b036e1] <==
	I1026 08:32:47.752833       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1026 08:32:47.753204       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1026 08:32:47.753416       1 main.go:148] setting mtu 1500 for CNI 
	I1026 08:32:47.753439       1 main.go:178] kindnetd IP family: "ipv4"
	I1026 08:32:47.753465       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-10-26T08:32:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1026 08:32:47.955688       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1026 08:32:47.955727       1 controller.go:381] "Waiting for informer caches to sync"
	I1026 08:32:47.955741       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1026 08:32:47.956465       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1026 08:32:48.278759       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1026 08:32:48.278796       1 metrics.go:72] Registering metrics
	I1026 08:32:48.278890       1 controller.go:711] "Syncing nftables rules"
	I1026 08:32:57.956162       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:32:57.956223       1 main.go:301] handling current node
	I1026 08:33:07.957451       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:07.957494       1 main.go:301] handling current node
	I1026 08:33:17.956436       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:17.956471       1 main.go:301] handling current node
	I1026 08:33:27.955967       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:27.956079       1 main.go:301] handling current node
	I1026 08:33:37.957871       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1026 08:33:37.957918       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bac6e251286c0426a8d66c24d98eec9378377f39d55baba7bda5c9b9d7aa2fdd] <==
	I1026 08:32:46.735834       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1026 08:32:46.735947       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1026 08:32:46.737289       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1026 08:32:46.737329       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1026 08:32:46.737465       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1026 08:32:46.737646       1 aggregator.go:171] initial CRD sync complete...
	I1026 08:32:46.738285       1 autoregister_controller.go:144] Starting autoregister controller
	I1026 08:32:46.738456       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1026 08:32:46.738465       1 cache.go:39] Caches are synced for autoregister controller
	I1026 08:32:46.741706       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1026 08:32:46.741735       1 policy_source.go:240] refreshing policies
	E1026 08:32:46.743518       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1026 08:32:46.765882       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1026 08:32:46.771925       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1026 08:32:47.068567       1 controller.go:667] quota admission added evaluator for: namespaces
	I1026 08:32:47.101537       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1026 08:32:47.127114       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1026 08:32:47.134743       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1026 08:32:47.142534       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1026 08:32:47.187670       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.109.123.141"}
	I1026 08:32:47.200481       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.28.151"}
	I1026 08:32:47.635202       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1026 08:32:50.556299       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1026 08:32:50.651747       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1026 08:32:50.703498       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fea0de012ed14198cce29294a9f8a6de6b56997c95421d8dbd5059a83bc10c30] <==
	I1026 08:32:50.067748       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1026 08:32:50.073142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:32:50.073161       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1026 08:32:50.073167       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1026 08:32:50.076343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1026 08:32:50.099087       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1026 08:32:50.100361       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1026 08:32:50.100406       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1026 08:32:50.100627       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1026 08:32:50.100652       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1026 08:32:50.101707       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1026 08:32:50.101728       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1026 08:32:50.101752       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1026 08:32:50.101835       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1026 08:32:50.102157       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1026 08:32:50.106419       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:50.108564       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1026 08:32:50.117751       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1026 08:32:50.121094       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1026 08:32:50.123288       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1026 08:32:50.124502       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1026 08:32:50.127798       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1026 08:32:50.129069       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1026 08:32:50.131180       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1026 08:32:50.661202       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/dashboard-metrics-scraper" err="EndpointSlice informer cache is out of date"
	
	
	==> kube-proxy [7e8addd91064c0bf781cb95b46604edaa687aeffe8855673b88feb7b30405028] <==
	I1026 08:32:47.516278       1 server_linux.go:53] "Using iptables proxy"
	I1026 08:32:47.583441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1026 08:32:47.684475       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1026 08:32:47.684527       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1026 08:32:47.684638       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1026 08:32:47.707631       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1026 08:32:47.707691       1 server_linux.go:132] "Using iptables Proxier"
	I1026 08:32:47.713975       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1026 08:32:47.714375       1 server.go:527] "Version info" version="v1.34.1"
	I1026 08:32:47.714411       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:47.716092       1 config.go:309] "Starting node config controller"
	I1026 08:32:47.716108       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1026 08:32:47.716239       1 config.go:403] "Starting serviceCIDR config controller"
	I1026 08:32:47.716282       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1026 08:32:47.716374       1 config.go:200] "Starting service config controller"
	I1026 08:32:47.716381       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1026 08:32:47.716397       1 config.go:106] "Starting endpoint slice config controller"
	I1026 08:32:47.716403       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1026 08:32:47.816651       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1026 08:32:47.816678       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1026 08:32:47.816694       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1026 08:32:47.816710       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [2c7535c22bfefd57d71740479f1db737373736089d752091b7f4c168c93f52e2] <==
	I1026 08:32:45.717603       1 serving.go:386] Generated self-signed cert in-memory
	W1026 08:32:46.656033       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1026 08:32:46.656072       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1026 08:32:46.656089       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1026 08:32:46.656098       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1026 08:32:46.700409       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1026 08:32:46.700499       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1026 08:32:46.703330       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:46.703389       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1026 08:32:46.703753       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1026 08:32:46.703785       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1026 08:32:46.803744       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Oct 26 08:32:55 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:55.206628     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:55 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:55.206870     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:56.002309     716 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:56.212764     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:56 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:56.213487     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:32:57 default-k8s-diff-port-866212 kubelet[716]: I1026 08:32:57.215049     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:32:57 default-k8s-diff-port-866212 kubelet[716]: E1026 08:32:57.215569     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:01 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:01.703387     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wb2rv" podStartSLOduration=4.619975629 podStartE2EDuration="11.703366841s" podCreationTimestamp="2025-10-26 08:32:50 +0000 UTC" firstStartedPulling="2025-10-26 08:32:50.962280941 +0000 UTC m=+6.954672640" lastFinishedPulling="2025-10-26 08:32:58.045672155 +0000 UTC m=+14.038063852" observedRunningTime="2025-10-26 08:32:58.232645485 +0000 UTC m=+14.225037201" watchObservedRunningTime="2025-10-26 08:33:01.703366841 +0000 UTC m=+17.695758556"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.123289     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.246688     716 scope.go:117] "RemoveContainer" containerID="2d924e75523e65cf716108d4bc7c5a9a6154a1948ab7b93995bc1f30f5c18605"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:08.246894     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:08 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:08.247081     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:15 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:15.445205     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:15 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:15.445535     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:18 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:18.279928     716 scope.go:117] "RemoveContainer" containerID="5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.123819     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.319722     716 scope.go:117] "RemoveContainer" containerID="0d0a2e034a1b631383e713f73e8dcf5b0bd63b51bb99590c3b2571ddc16f7438"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:31.319931     716 scope.go:117] "RemoveContainer" containerID="26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	Oct 26 08:33:31 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:31.320164     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:35 default-k8s-diff-port-866212 kubelet[716]: I1026 08:33:35.444957     716 scope.go:117] "RemoveContainer" containerID="26835d1b859d238b4dde18556dcace1b943ca48e24c9d1532d71b511072339a8"
	Oct 26 08:33:35 default-k8s-diff-port-866212 kubelet[716]: E1026 08:33:35.445216     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-qshwh_kubernetes-dashboard(45b5bbf0-9312-4b22-9c82-ce31766bbea9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-qshwh" podUID="45b5bbf0-9312-4b22-9c82-ce31766bbea9"
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: kubelet.service: Deactivated successfully.
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Oct 26 08:33:41 default-k8s-diff-port-866212 systemd[1]: kubelet.service: Consumed 1.915s CPU time.
	
	
	==> kubernetes-dashboard [13ab62cadeb682750cf6f3a123c69691223f42268b8d1a98b2bc848057e8445b] <==
	2025/10/26 08:32:58 Using namespace: kubernetes-dashboard
	2025/10/26 08:32:58 Using in-cluster config to connect to apiserver
	2025/10/26 08:32:58 Using secret token for csrf signing
	2025/10/26 08:32:58 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/10/26 08:32:58 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/10/26 08:32:58 Successful initial request to the apiserver, version: v1.34.1
	2025/10/26 08:32:58 Generating JWE encryption key
	2025/10/26 08:32:58 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/10/26 08:32:58 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/10/26 08:32:58 Initializing JWE encryption key from synchronized object
	2025/10/26 08:32:58 Creating in-cluster Sidecar client
	2025/10/26 08:32:58 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/10/26 08:32:58 Serving insecurely on HTTP port: 9090
	2025/10/26 08:32:58 Starting overwatch
	2025/10/26 08:33:28 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [39400809d8a6cff3435ab7c9a9b30fec1761d9d6fd7481ca9c6efb4ba004e297] <==
	I1026 08:33:18.344981       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1026 08:33:18.345038       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1026 08:33:18.347340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:21.802768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:26.068035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:29.665842       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:32.719757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.742333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.746916       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:33:35.747086       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1026 08:33:35.747152       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"46264170-5b73-4301-a763-5e3adc5f609e", APIVersion:"v1", ResourceVersion:"632", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756 became leader
	I1026 08:33:35.747265       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756!
	W1026 08:33:35.749664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:35.753386       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1026 08:33:35.848180       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-866212_9639eabe-d358-4eab-8742-50c1661cd756!
	W1026 08:33:37.756939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:37.763137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:39.767637       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:39.773997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:41.776907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:41.780728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:43.783410       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:43.787345       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:45.790941       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1026 08:33:45.795974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [5e9a95956c5c17dd1f03f2dbf5ceb7ebd79ac63c5243e8c40cb8511e2e4b6696] <==
	I1026 08:32:47.484863       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1026 08:33:17.486710       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212: exit status 2 (332.580036ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.47s)
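Aside for readers triaging the repeated client-go warnings in the storage-provisioner log above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"): below is a minimal Go sketch of reading EndpointSlices instead of core/v1 Endpoints with client-go, assuming out-of-cluster access via a kubeconfig at the default location. This is illustrative only, not minikube's or the storage provisioner's actual code (the provisioner's leader election still uses an Endpoints lock, which is what triggers the warnings).

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (assumption: running outside the cluster).
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		// List discovery.k8s.io/v1 EndpointSlices, as the deprecation
		// warning suggests, instead of the deprecated core/v1 Endpoints.
		slices, err := clientset.DiscoveryV1().EndpointSlices("kube-system").
			List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, s := range slices.Items {
			fmt.Println(s.Name, len(s.Endpoints))
		}
	}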

                                                
                                    

Test pass (263/326)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.86
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 4.3
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.22
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 0.41
21 TestBinaryMirror 0.82
22 TestOffline 55.24
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
27 TestAddons/Setup 156.92
31 TestAddons/serial/GCPAuth/Namespaces 0.14
32 TestAddons/serial/GCPAuth/FakeCredentials 8.42
48 TestAddons/StoppedEnableDisable 18.58
49 TestCertOptions 27.28
50 TestCertExpiration 218.77
52 TestForceSystemdFlag 26.47
53 TestForceSystemdEnv 35.48
58 TestErrorSpam/setup 19.71
59 TestErrorSpam/start 0.68
60 TestErrorSpam/status 0.95
61 TestErrorSpam/pause 5.8
62 TestErrorSpam/unpause 5.34
63 TestErrorSpam/stop 2.6
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.84
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6.08
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.14
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.68
75 TestFunctional/serial/CacheCmd/cache/add_local 0.78
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.55
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.11
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 48.18
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.2
86 TestFunctional/serial/LogsFileCmd 1.23
87 TestFunctional/serial/InvalidService 4.01
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 6.49
91 TestFunctional/parallel/DryRun 0.61
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.15
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 24.74
101 TestFunctional/parallel/SSHCmd 0.55
102 TestFunctional/parallel/CpCmd 1.82
103 TestFunctional/parallel/MySQL 16.96
104 TestFunctional/parallel/FileSync 0.32
105 TestFunctional/parallel/CertSync 2.14
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.62
113 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/Version/short 0.06
116 TestFunctional/parallel/Version/components 0.47
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
121 TestFunctional/parallel/ImageCommands/ImageBuild 2.74
122 TestFunctional/parallel/ImageCommands/Setup 0.43
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
129 TestFunctional/parallel/ProfileCmd/profile_not_create 0.5
130 TestFunctional/parallel/ProfileCmd/profile_list 0.49
131 TestFunctional/parallel/ProfileCmd/profile_json_output 0.51
133 TestFunctional/parallel/MountCmd/any-port 6.47
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.53
137 TestFunctional/parallel/MountCmd/specific-port 1.75
138 TestFunctional/parallel/MountCmd/VerifyCleanup 1.51
140 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
141 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
143 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 7.19
144 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
145 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
149 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
150 TestFunctional/parallel/ServiceCmd/List 1.71
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 108.79
163 TestMultiControlPlane/serial/DeployApp 4.57
164 TestMultiControlPlane/serial/PingHostFromPods 1.03
165 TestMultiControlPlane/serial/AddWorkerNode 23.41
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.9
168 TestMultiControlPlane/serial/CopyFile 17.25
169 TestMultiControlPlane/serial/StopSecondaryNode 14.3
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.71
171 TestMultiControlPlane/serial/RestartSecondaryNode 9.21
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 104.4
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.52
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 41.14
177 TestMultiControlPlane/serial/RestartCluster 58.16
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 37.5
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.89
184 TestJSONOutput/start/Command 37.12
185 TestJSONOutput/start/Audit 0
187 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
191 TestJSONOutput/pause/Audit 0
193 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
197 TestJSONOutput/unpause/Audit 0
199 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
202 TestJSONOutput/stop/Command 6.14
203 TestJSONOutput/stop/Audit 0
205 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
206 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
207 TestErrorJSONOutput 0.23
209 TestKicCustomNetwork/create_custom_network 26.41
210 TestKicCustomNetwork/use_default_bridge_network 26.11
211 TestKicExistingNetwork 24.6
212 TestKicCustomSubnet 24.44
213 TestKicStaticIP 24.72
214 TestMainNoArgs 0.06
215 TestMinikubeProfile 50.22
218 TestMountStart/serial/StartWithMountFirst 5.55
219 TestMountStart/serial/VerifyMountFirst 0.27
220 TestMountStart/serial/StartWithMountSecond 5.16
221 TestMountStart/serial/VerifyMountSecond 0.27
222 TestMountStart/serial/DeleteFirst 1.7
223 TestMountStart/serial/VerifyMountPostDelete 0.27
224 TestMountStart/serial/Stop 1.24
225 TestMountStart/serial/RestartStopped 7.16
226 TestMountStart/serial/VerifyMountPostStop 0.27
229 TestMultiNode/serial/FreshStart2Nodes 65.05
230 TestMultiNode/serial/DeployApp2Nodes 3.95
231 TestMultiNode/serial/PingHostFrom2Pods 0.69
232 TestMultiNode/serial/AddNode 23.33
233 TestMultiNode/serial/MultiNodeLabels 0.06
234 TestMultiNode/serial/ProfileList 0.67
235 TestMultiNode/serial/CopyFile 9.84
236 TestMultiNode/serial/StopNode 2.27
237 TestMultiNode/serial/StartAfterStop 7.19
238 TestMultiNode/serial/RestartKeepsNodes 80.49
239 TestMultiNode/serial/DeleteNode 5.24
240 TestMultiNode/serial/StopMultiNode 30.35
241 TestMultiNode/serial/RestartMultiNode 44.58
242 TestMultiNode/serial/ValidateNameConflict 25.05
247 TestPreload 87.19
249 TestScheduledStopUnix 97.94
252 TestInsufficientStorage 9.58
253 TestRunningBinaryUpgrade 47.8
255 TestKubernetesUpgrade 302.42
256 TestMissingContainerUpgrade 88.22
258 TestPause/serial/Start 51.02
259 TestPause/serial/SecondStartNoReconfiguration 11.15
261 TestStoppedBinaryUpgrade/Setup 0.42
262 TestStoppedBinaryUpgrade/Upgrade 43.4
263 TestStoppedBinaryUpgrade/MinikubeLogs 1.01
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.11
273 TestNoKubernetes/serial/StartWithK8s 22.84
281 TestNetworkPlugins/group/false 3.81
286 TestStartStop/group/old-k8s-version/serial/FirstStart 50.98
287 TestNoKubernetes/serial/StartWithStopK8s 19.26
288 TestNoKubernetes/serial/Start 4.71
289 TestNoKubernetes/serial/VerifyK8sNotRunning 0.29
290 TestNoKubernetes/serial/ProfileList 18.47
292 TestStartStop/group/no-preload/serial/FirstStart 51.65
293 TestNoKubernetes/serial/Stop 1.51
294 TestNoKubernetes/serial/StartNoArgs 7.25
295 TestStartStop/group/old-k8s-version/serial/DeployApp 9.52
296 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.32
298 TestStartStop/group/embed-certs/serial/FirstStart 40.47
300 TestStartStop/group/old-k8s-version/serial/Stop 16
301 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
302 TestStartStop/group/old-k8s-version/serial/SecondStart 42.54
303 TestStartStop/group/no-preload/serial/DeployApp 7.29
305 TestStartStop/group/no-preload/serial/Stop 16.71
306 TestStartStop/group/embed-certs/serial/DeployApp 8.25
308 TestStartStop/group/embed-certs/serial/Stop 16.41
309 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
310 TestStartStop/group/no-preload/serial/SecondStart 45.85
311 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
312 TestStartStop/group/embed-certs/serial/SecondStart 50.09
313 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
314 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.1
315 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
318 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 40.67
319 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
320 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
321 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.28
324 TestStartStop/group/newest-cni/serial/FirstStart 30.05
325 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
326 TestNetworkPlugins/group/auto/Start 39.81
327 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
328 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.27
330 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.3
331 TestNetworkPlugins/group/kindnet/Start 73.02
333 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.52
334 TestStartStop/group/newest-cni/serial/DeployApp 0
336 TestStartStop/group/newest-cni/serial/Stop 2.52
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
338 TestStartStop/group/newest-cni/serial/SecondStart 11.56
339 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.28
340 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.13
341 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
343 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
345 TestNetworkPlugins/group/auto/KubeletFlags 0.35
346 TestNetworkPlugins/group/auto/NetCatPod 9.24
347 TestNetworkPlugins/group/calico/Start 54.49
348 TestNetworkPlugins/group/auto/DNS 0.13
349 TestNetworkPlugins/group/auto/Localhost 0.14
350 TestNetworkPlugins/group/auto/HairPin 0.1
351 TestNetworkPlugins/group/custom-flannel/Start 50.63
352 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
353 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
354 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 6.07
355 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
356 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
357 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.25
359 TestNetworkPlugins/group/calico/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/DNS 0.12
361 TestNetworkPlugins/group/kindnet/Localhost 0.09
362 TestNetworkPlugins/group/kindnet/HairPin 0.09
363 TestNetworkPlugins/group/enable-default-cni/Start 65.18
364 TestNetworkPlugins/group/calico/KubeletFlags 0.33
365 TestNetworkPlugins/group/calico/NetCatPod 8.22
366 TestNetworkPlugins/group/calico/DNS 0.12
367 TestNetworkPlugins/group/calico/Localhost 0.11
368 TestNetworkPlugins/group/calico/HairPin 0.11
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.24
371 TestNetworkPlugins/group/flannel/Start 52.87
372 TestNetworkPlugins/group/custom-flannel/DNS 0.12
373 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
374 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
375 TestNetworkPlugins/group/bridge/Start 70.55
376 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
377 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.18
378 TestNetworkPlugins/group/flannel/ControllerPod 6
379 TestNetworkPlugins/group/enable-default-cni/DNS 0.11
380 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
381 TestNetworkPlugins/group/enable-default-cni/HairPin 0.08
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
383 TestNetworkPlugins/group/flannel/NetCatPod 9.17
384 TestNetworkPlugins/group/flannel/DNS 0.11
385 TestNetworkPlugins/group/flannel/Localhost 0.09
386 TestNetworkPlugins/group/flannel/HairPin 0.1
387 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
388 TestNetworkPlugins/group/bridge/NetCatPod 9.19
389 TestNetworkPlugins/group/bridge/DNS 0.1
390 TestNetworkPlugins/group/bridge/Localhost 0.08
391 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-095815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-095815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.863164538s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.86s)
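For context on what this subtest consumes: `minikube start -o=json` emits one JSON event per line on stdout. A minimal sketch of decoding such a line-delimited stream with the Go standard library follows; the event field names ("type", "data") are assumptions for illustration, not a statement of minikube's exact event schema. It can be run as, e.g., `minikube start -o=json | ./decoder`.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"log"
		"os"
	)

	func main() {
		// Read newline-delimited JSON events from stdin.
		sc := bufio.NewScanner(os.Stdin)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // allow long event lines
		for sc.Scan() {
			var ev map[string]any
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				log.Printf("skipping non-JSON line: %v", err)
				continue
			}
			fmt.Println(ev["type"], ev["data"]) // field names are assumed, not verified
		}
		if err := sc.Err(); err != nil {
			log.Fatal(err)
		}
	}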

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1026 07:47:14.490753   12921 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1026 07:47:14.490849   12921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)
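The two preload.go log lines above boil down to a file-existence check on the cached preload tarball. A minimal sketch of that kind of check, assuming a hard-coded cache path for illustration; `preloadPath` and the printed messages are hypothetical, not preload.go's actual identifiers.

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// Hypothetical cache path, mirroring the one logged above.
		preloadPath := os.ExpandEnv("$HOME/.minikube/cache/preloaded-tarball/" +
			"preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4")
		if _, err := os.Stat(preloadPath); err == nil {
			fmt.Println("Found local preload:", preloadPath)
		} else if os.IsNotExist(err) {
			fmt.Println("No local preload; it would be downloaded to:", preloadPath)
		} else {
			fmt.Println("stat error:", err)
		}
	}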

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-095815
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-095815: exit status 85 (67.205438ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-095815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-095815 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:09
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:09.677285   12933 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:09.677530   12933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:09.677539   12933 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:09.677542   12933 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:09.677727   12933 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	W1026 07:47:09.677839   12933 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21772-9429/.minikube/config/config.json: open /home/jenkins/minikube-integration/21772-9429/.minikube/config/config.json: no such file or directory
	I1026 07:47:09.678366   12933 out.go:368] Setting JSON to true
	I1026 07:47:09.679184   12933 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1781,"bootTime":1761463049,"procs":202,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:09.679279   12933 start.go:141] virtualization: kvm guest
	I1026 07:47:09.681270   12933 out.go:99] [download-only-095815] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1026 07:47:09.681894   12933 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball: no such file or directory
	I1026 07:47:09.681925   12933 notify.go:220] Checking for updates...
	I1026 07:47:09.683066   12933 out.go:171] MINIKUBE_LOCATION=21772
	I1026 07:47:09.684342   12933 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:09.686275   12933 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:47:09.687632   12933 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:47:09.688692   12933 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 07:47:09.690584   12933 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 07:47:09.690844   12933 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:09.715600   12933 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:47:09.715662   12933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:10.117842   12933 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-26 07:47:10.105594604 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:10.117974   12933 docker.go:318] overlay module found
	I1026 07:47:10.119389   12933 out.go:99] Using the docker driver based on user configuration
	I1026 07:47:10.119412   12933 start.go:305] selected driver: docker
	I1026 07:47:10.119417   12933 start.go:925] validating driver "docker" against <nil>
	I1026 07:47:10.119486   12933 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:10.177913   12933 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:64 SystemTime:2025-10-26 07:47:10.168414995 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:10.178075   12933 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:10.178596   12933 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1026 07:47:10.178765   12933 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 07:47:10.180439   12933 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-095815 host does not exist
	  To start a cluster, run: "minikube start -p download-only-095815"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-095815
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (4.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-460564 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-460564 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.30053144s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (4.30s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1026 07:47:19.225395   12921 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1026 07:47:19.225426   12921 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21772-9429/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-460564
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-460564: exit status 85 (70.954342ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-095815 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-095815 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ delete  │ -p download-only-095815                                                                                                                                                   │ download-only-095815 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │ 26 Oct 25 07:47 UTC │
	│ start   │ -o=json --download-only -p download-only-460564 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-460564 │ jenkins │ v1.37.0 │ 26 Oct 25 07:47 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/10/26 07:47:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1026 07:47:14.978971   13286 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:47:14.979212   13286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:14.979221   13286 out.go:374] Setting ErrFile to fd 2...
	I1026 07:47:14.979225   13286 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:47:14.979425   13286 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:47:14.979855   13286 out.go:368] Setting JSON to true
	I1026 07:47:14.980603   13286 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1786,"bootTime":1761463049,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:47:14.980685   13286 start.go:141] virtualization: kvm guest
	I1026 07:47:14.982609   13286 out.go:99] [download-only-460564] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:47:14.982749   13286 notify.go:220] Checking for updates...
	I1026 07:47:14.983996   13286 out.go:171] MINIKUBE_LOCATION=21772
	I1026 07:47:14.985341   13286 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:47:14.986466   13286 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:47:14.987552   13286 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:47:14.988807   13286 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1026 07:47:14.991457   13286 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1026 07:47:14.991655   13286 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:47:15.014083   13286 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:47:15.014181   13286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:15.070132   13286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-26 07:47:15.061272742 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:15.070267   13286 docker.go:318] overlay module found
	I1026 07:47:15.071733   13286 out.go:99] Using the docker driver based on user configuration
	I1026 07:47:15.071758   13286 start.go:305] selected driver: docker
	I1026 07:47:15.071763   13286 start.go:925] validating driver "docker" against <nil>
	I1026 07:47:15.071843   13286 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:47:15.124964   13286 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-10-26 07:47:15.115294473 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:47:15.125154   13286 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1026 07:47:15.125685   13286 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1026 07:47:15.125842   13286 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1026 07:47:15.127517   13286 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-460564 host does not exist
	  To start a cluster, run: "minikube start -p download-only-460564"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.22s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-460564
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (0.41s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-893358 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-893358" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-893358
--- PASS: TestDownloadOnlyKic (0.41s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1026 07:47:20.345497   12921 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-916619 --alsologtostderr --binary-mirror http://127.0.0.1:36125 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-916619" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-916619
--- PASS: TestBinaryMirror (0.82s)
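The binary.go line above shows the URL format this test exercises: the binary URL plus a `?checksum=file:<url>.sha256` query, which is the hashicorp/go-getter checksum convention. A sketch that rebuilds the URL seen in the log; checksumURL is an illustrative helper, not minikube's actual function:

	package main

	import "fmt"

	// checksumURL reproduces the "?checksum=file:<url>.sha256" form from
	// the binary.go log line above.
	func checksumURL(version, goos, arch, binary string) string {
		base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/%s/%s/%s", version, goos, arch, binary)
		return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
	}

	func main() {
		fmt.Println(checksumURL("v1.34.1", "linux", "amd64", "kubectl"))
	}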

TestOffline (55.24s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-486469 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-486469 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (50.277194997s)
helpers_test.go:175: Cleaning up "offline-crio-486469" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-486469
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-486469: (4.967528001s)
--- PASS: TestOffline (55.24s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-610291
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-610291: exit status 85 (65.333557ms)

-- stdout --
	* Profile "addons-610291" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610291"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-610291
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-610291: exit status 85 (66.284999ms)

-- stdout --
	* Profile "addons-610291" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-610291"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (156.92s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-610291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-610291 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m36.921898272s)
--- PASS: TestAddons/Setup (156.92s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-610291 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-610291 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-610291 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-610291 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [1807b2d1-eb55-43a0-bcf7-e56cbd0c5cbc] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002981445s
addons_test.go:694: (dbg) Run:  kubectl --context addons-610291 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-610291 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-610291 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

TestAddons/StoppedEnableDisable (18.58s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-610291
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-610291: (18.299090448s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-610291
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-610291
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-610291
--- PASS: TestAddons/StoppedEnableDisable (18.58s)

TestCertOptions (27.28s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-344588 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1026 08:28:01.797455   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-344588 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (24.187170237s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-344588 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-344588 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-344588 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-344588" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-344588
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-344588: (2.426448806s)
--- PASS: TestCertOptions (27.28s)

TestCertExpiration (218.77s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-535689 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-535689 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (30.677286992s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-535689 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.628658912s)
helpers_test.go:175: Cleaning up "cert-expiration-535689" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-535689
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-535689: (2.461911026s)
--- PASS: TestCertExpiration (218.77s)
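The two starts above exercise --cert-expiration at its extremes: 3m (certificates about to lapse, forcing regeneration on the second start) and 8760h (one year). The core of such a check is comparing a certificate's NotAfter against a window. A sketch only, using the apiserver cert path seen in TestCertOptions above; this is not minikube's actual cert-rotation code:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemBytes
	// expires inside the given window.
	func expiresWithin(pemBytes []byte, window time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, fmt.Errorf("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		soon, err := expiresWithin(data, 3*time.Minute)
		fmt.Println("expires within 3m:", soon, err)
	}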

TestForceSystemdFlag (26.47s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-689178 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-689178 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (23.741335028s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-689178 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-689178" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-689178
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-689178: (2.430676307s)
--- PASS: TestForceSystemdFlag (26.47s)

TestForceSystemdEnv (35.48s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-519045 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-519045 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (32.959536264s)
helpers_test.go:175: Cleaning up "force-systemd-env-519045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-519045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-519045: (2.52394837s)
--- PASS: TestForceSystemdEnv (35.48s)

TestErrorSpam/setup (19.71s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-509301 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-509301 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-509301 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-509301 --driver=docker  --container-runtime=crio: (19.713743812s)
--- PASS: TestErrorSpam/setup (19.71s)

TestErrorSpam/start (0.68s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 start --dry-run
--- PASS: TestErrorSpam/start (0.68s)

TestErrorSpam/status (0.95s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 status
--- PASS: TestErrorSpam/status (0.95s)

TestErrorSpam/pause (5.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause: exit status 80 (2.352936628s)

-- stdout --
	* Pausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause: exit status 80 (1.560495343s)

-- stdout --
	* Pausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause: exit status 80 (1.887079389s)

-- stdout --
	* Pausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.80s)

TestErrorSpam/unpause (5.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause: exit status 80 (1.846917369s)

-- stdout --
	* Unpausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:41Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause: exit status 80 (1.486184009s)

-- stdout --
	* Unpausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:42Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause: exit status 80 (2.005725969s)

-- stdout --
	* Unpausing node nospam-509301 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-10-26T07:53:44Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.34s)

TestErrorSpam/stop (2.6s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 stop: (2.399799418s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-509301 --log_dir /tmp/nospam-509301 stop
--- PASS: TestErrorSpam/stop (2.60s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21772-9429/.minikube/files/etc/test/nested/copy/12921/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.84s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-852274 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.844526804s)
--- PASS: TestFunctional/serial/StartWithProxy (37.84s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (6.08s)

=== RUN   TestFunctional/serial/SoftStart
I1026 07:54:30.300286   12921 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-852274 --alsologtostderr -v=8: (6.078865676s)
functional_test.go:678: soft start took 6.079623956s for "functional-852274" cluster.
I1026 07:54:36.379584   12921 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.08s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.14s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-852274 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.14s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.68s)

TestFunctional/serial/CacheCmd/cache/add_local (0.78s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-852274 /tmp/TestFunctionalserialCacheCmdcacheadd_local3939267722/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache add minikube-local-cache-test:functional-852274
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache delete minikube-local-cache-test:functional-852274
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-852274
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.78s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (285.456967ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.55s)
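The sequence above is the whole contract of `cache reload`: remove the image inside the node, confirm `crictl inspecti` fails, reload, confirm it succeeds. The same flow driven from Go, as a sketch; the binary, profile, and image names are the ones from this log, and error handling is reduced to the essentials:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run executes a command and folds its combined output into the error.
	func run(name string, args ...string) error {
		out, err := exec.Command(name, args...).CombinedOutput()
		if err != nil {
			return fmt.Errorf("%s %v: %w: %s", name, args, err, out)
		}
		return nil
	}

	func main() {
		const mk = "out/minikube-linux-amd64"
		const profile = "functional-852274"
		const img = "registry.k8s.io/pause:latest"

		// Remove the image in-node, confirm it is gone, reload the cache,
		// then confirm it is back: the same sequence as the test above.
		_ = run(mk, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)
		if run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img) == nil {
			fmt.Println("unexpected: image still present after rmi")
			return
		}
		if err := run(mk, "-p", profile, "cache", "reload"); err != nil {
			fmt.Println("reload failed:", err)
			return
		}
		fmt.Println("restored:", run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img) == nil)
	}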

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 kubectl -- --context functional-852274 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-852274 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (48.18s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1026 07:54:58.727688   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:58.734134   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:58.745547   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:58.767009   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:58.808422   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:58.889967   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:59.051533   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:54:59.373212   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:55:00.015355   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:55:01.296986   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:55:03.859908   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:55:08.981452   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:55:19.223099   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-852274 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (48.179769369s)
functional_test.go:776: restart took 48.17987718s for "functional-852274" cluster.
I1026 07:55:30.504573   12921 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (48.18s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-852274 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)
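The phase/status lines above come from walking `kubectl get po -o=json` output for the control-plane pods. A reduced sketch of that walk, declaring only the fields the check needs; the podList type is illustrative, not the test's actual code:

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type podList struct {
		Items []struct {
			Metadata struct{ Name string } `json:"metadata"`
			Status   struct {
				Phase      string `json:"phase"`
				Conditions []struct {
					Type   string `json:"type"`
					Status string `json:"status"`
				} `json:"conditions"`
			} `json:"status"`
		} `json:"items"`
	}

	func main() {
		out, err := exec.Command("kubectl", "--context", "functional-852274",
			"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
		if err != nil {
			panic(err)
		}
		var pl podList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Items {
			ready := "Unknown"
			for _, c := range p.Status.Conditions {
				if c.Type == "Ready" {
					ready = c.Status
				}
			}
			fmt.Printf("%s phase=%s ready=%s\n", p.Metadata.Name, p.Status.Phase, ready)
		}
	}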

TestFunctional/serial/LogsCmd (1.2s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 logs: (1.196612388s)
--- PASS: TestFunctional/serial/LogsCmd (1.20s)

TestFunctional/serial/LogsFileCmd (1.23s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 logs --file /tmp/TestFunctionalserialLogsFileCmd3876621765/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 logs --file /tmp/TestFunctionalserialLogsFileCmd3876621765/001/logs.txt: (1.226659997s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.23s)

TestFunctional/serial/InvalidService (4.01s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-852274 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-852274
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-852274: exit status 115 (339.268688ms)
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30592 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-852274 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.01s)
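Note on the failure mode above: invalid-svc exposes a NodePort, but no running pod backs it, so minikube exits with SVC_UNREACHABLE as the test expects. A rough client-go sketch of that kind of check; hasReadyEndpoints is a hypothetical helper, not the test's or minikube's implementation:

package svccheck

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasReadyEndpoints reports whether a Service has at least one ready
// endpoint address; "no running pod for service invalid-svc found"
// corresponds to the false case.
func hasReadyEndpoints(ctx context.Context, c kubernetes.Interface, ns, svc string) (bool, error) {
	ep, err := c.CoreV1().Endpoints(ns).Get(ctx, svc, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, subset := range ep.Subsets {
		if len(subset.Addresses) > 0 {
			return true, nil
		}
	}
	return false, nil
}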
TestFunctional/parallel/ConfigCmd (0.45s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 config get cpus: exit status 14 (82.286457ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 config get cpus: exit status 14 (81.621905ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
TestFunctional/parallel/DashboardCmd (6.49s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-852274 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-852274 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 49637: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (6.49s)
TestFunctional/parallel/DryRun (0.61s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-852274 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (214.119809ms)
-- stdout --
	* [functional-852274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1026 07:55:43.685911   48109 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:55:43.686022   48109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:43.686032   48109 out.go:374] Setting ErrFile to fd 2...
	I1026 07:55:43.686037   48109 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:43.686371   48109 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:55:43.687021   48109 out.go:368] Setting JSON to false
	I1026 07:55:43.688331   48109 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2295,"bootTime":1761463049,"procs":221,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:55:43.688411   48109 start.go:141] virtualization: kvm guest
	I1026 07:55:43.691598   48109 out.go:179] * [functional-852274] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 07:55:43.694136   48109 notify.go:220] Checking for updates...
	I1026 07:55:43.694162   48109 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:55:43.695950   48109 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:55:43.697704   48109 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:55:43.699456   48109 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:55:43.701383   48109 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:55:43.703073   48109 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:55:43.705872   48109 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:55:43.706585   48109 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:55:43.737356   48109 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:55:43.737462   48109 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:55:43.813185   48109 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-26 07:55:43.799926436 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:55:43.813365   48109 docker.go:318] overlay module found
	I1026 07:55:43.816256   48109 out.go:179] * Using the docker driver based on existing profile
	I1026 07:55:43.817779   48109 start.go:305] selected driver: docker
	I1026 07:55:43.817804   48109 start.go:925] validating driver "docker" against &{Name:functional-852274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-852274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:55:43.817939   48109 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:55:43.820401   48109 out.go:203] 
	W1026 07:55:43.822041   48109 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1026 07:55:43.823591   48109 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.61s)
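Both dry-run exits above are the intended behavior: minikube validates the requested memory before touching the cluster. A minimal standalone sketch of that rule; minUsableMB and validateMemory are illustrative names, not minikube's code:

package main

import "fmt"

// minUsableMB mirrors the "usable minimum of 1800MB" in the error above.
const minUsableMB = 1800

func validateMemory(requestedMB int) error {
	if requestedMB < minUsableMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB", requestedMB, minUsableMB)
	}
	return nil
}

func main() {
	// --memory 250MB, as passed in the dry-run invocation above.
	if err := validateMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
	}
}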
TestFunctional/parallel/InternationalLanguage (0.19s)
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-852274 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-852274 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (184.839136ms)
-- stdout --
	* [functional-852274] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1026 07:55:45.987842   49256 out.go:360] Setting OutFile to fd 1 ...
	I1026 07:55:45.987934   49256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.987940   49256 out.go:374] Setting ErrFile to fd 2...
	I1026 07:55:45.987947   49256 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 07:55:45.988301   49256 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 07:55:45.988808   49256 out.go:368] Setting JSON to false
	I1026 07:55:45.990012   49256 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":2297,"bootTime":1761463049,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 07:55:45.990122   49256 start.go:141] virtualization: kvm guest
	I1026 07:55:45.992281   49256 out.go:179] * [functional-852274] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1026 07:55:45.993894   49256 notify.go:220] Checking for updates...
	I1026 07:55:45.993921   49256 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 07:55:45.995290   49256 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 07:55:45.996615   49256 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 07:55:45.998163   49256 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 07:55:45.999492   49256 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 07:55:46.000779   49256 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 07:55:46.002415   49256 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 07:55:46.002944   49256 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 07:55:46.034928   49256 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 07:55:46.035049   49256 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 07:55:46.103435   49256 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-10-26 07:55:46.08889079 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 07:55:46.103528   49256 docker.go:318] overlay module found
	I1026 07:55:46.105117   49256 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1026 07:55:46.106567   49256 start.go:305] selected driver: docker
	I1026 07:55:46.106587   49256 start.go:925] validating driver "docker" against &{Name:functional-852274 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1760939008-21773@sha256:d8d8a3f29f027433bea12764bddd1aa26c7ad9bb912e016c1bc51278db1343d8 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-852274 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1026 07:55:46.106694   49256 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 07:55:46.108368   49256 out.go:203] 
	W1026 07:55:46.109668   49256 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1026 07:55:46.110826   49256 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
TestFunctional/parallel/StatusCmd (1.15s)
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.15s)
TestFunctional/parallel/AddonsCmd (0.15s)
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)
TestFunctional/parallel/PersistentVolumeClaim (24.74s)
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [d639148b-5244-4ef9-8670-271962f9ca38] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003276923s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-852274 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-852274 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-852274 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-852274 apply -f testdata/storage-provisioner/pod.yaml
I1026 07:55:58.957591   12921 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [778e54fe-b089-4e8f-8eea-9e94c5321ee6] Pending
helpers_test.go:352: "sp-pod" [778e54fe-b089-4e8f-8eea-9e94c5321ee6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [778e54fe-b089-4e8f-8eea-9e94c5321ee6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.003708942s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-852274 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-852274 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-852274 delete -f testdata/storage-provisioner/pod.yaml: (1.086146102s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-852274 apply -f testdata/storage-provisioner/pod.yaml
I1026 07:56:10.251775   12921 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [ee7286fc-8b8c-4f09-8e1c-cda44510b715] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [ee7286fc-8b8c-4f09-8e1c-cda44510b715] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0036922s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-852274 exec sp-pod -- ls /tmp/mount
E1026 07:56:20.667050   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:57:42.589199   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 07:59:58.718572   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:00:26.430979   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:04:58.718379   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (24.74s)
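The "waiting 6m0s for pods matching ..." lines above poll until a pod carrying the label reports Running. A rough client-go sketch of such a wait; waitForRunningPod is an assumed helper, not the harness's code:

package podwait

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRunningPod polls every two seconds until a pod matching the
// label selector (e.g. "test=storage-provisioner") is Running, or the
// timeout expires.
func waitForRunningPod(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, err
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil // not ready yet; keep polling
		})
}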
TestFunctional/parallel/SSHCmd (0.55s)
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)
TestFunctional/parallel/CpCmd (1.82s)
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh -n functional-852274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cp functional-852274:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1073316723/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh -n functional-852274 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh -n functional-852274 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.82s)
TestFunctional/parallel/MySQL (16.96s)
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-852274 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-xdb42" [be19c4ce-977e-4ebd-aa7d-48a54edd4ed2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-xdb42" [be19c4ce-977e-4ebd-aa7d-48a54edd4ed2] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 12.003709772s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;": exit status 1 (124.040898ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
I1026 07:55:50.592601   12921 retry.go:31] will retry after 1.427374958s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;": exit status 1 (96.83707ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1026 07:55:52.117307   12921 retry.go:31] will retry after 1.622451237s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;": exit status 1 (94.117206ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1026 07:55:53.834792   12921 retry.go:31] will retry after 1.306653945s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-852274 exec mysql-5bb876957f-xdb42 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.96s)
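The three non-zero exits above are mysqld's normal startup window (auth tables not loaded yet, then the socket not yet listening), which the harness rides out via "retry.go:31] will retry after ...". A minimal sketch of that retry-with-jittered-backoff pattern; illustrative only, not minikube's retry.go:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn up to attempts times, sleeping a jittered delay between
// failures, and returns the last error if every attempt fails.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		delay := base + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retry(4, time.Second, func() error {
		calls++
		if calls < 3 {
			return errors.New("exit status 1") // e.g. mysqld not accepting connections yet
		}
		return nil
	})
	fmt.Println("result:", err)
}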
TestFunctional/parallel/FileSync (0.32s)
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/12921/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /etc/test/nested/copy/12921/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.32s)
TestFunctional/parallel/CertSync (2.14s)
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/12921.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /etc/ssl/certs/12921.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/12921.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /usr/share/ca-certificates/12921.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /etc/ssl/certs/51391683.0"
E1026 07:55:39.705409   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/129212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /etc/ssl/certs/129212.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/129212.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /usr/share/ca-certificates/129212.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.14s)
TestFunctional/parallel/NodeLabels (0.07s)
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-852274 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "sudo systemctl is-active docker": exit status 1 (321.9717ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "sudo systemctl is-active containerd": exit status 1 (295.545449ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.62s)
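The two exit-status-1 results above are the point of the test: on a crio cluster, docker and containerd must report inactive, and `systemctl is-active` exits non-zero for any non-active unit (the remote "status 3" is systemd's code for inactive). A small standalone sketch of the same probe, run directly on a node rather than through `minikube ssh`:

package main

import (
	"fmt"
	"os/exec"
)

// isActive reports whether a systemd unit is active; systemctl exits 0
// only for an active unit.
func isActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	for _, unit := range []string{"docker", "containerd", "crio"} {
		fmt.Printf("%s: active=%v\n", unit, isActive(unit))
	}
}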
TestFunctional/parallel/License (0.27s)
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)
TestFunctional/parallel/Version/short (0.06s)
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.47s)
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852274 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852274 image ls --format short --alsologtostderr:
I1026 07:56:03.268098   53160 out.go:360] Setting OutFile to fd 1 ...
I1026 07:56:03.268337   53160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:03.268345   53160 out.go:374] Setting ErrFile to fd 2...
I1026 07:56:03.268349   53160 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:03.268545   53160 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
I1026 07:56:03.269066   53160 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:03.269153   53160 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:03.269503   53160 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
I1026 07:56:03.288508   53160 ssh_runner.go:195] Run: systemctl --version
I1026 07:56:03.288554   53160 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
I1026 07:56:03.305526   53160 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
I1026 07:56:03.404063   53160 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852274 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ localhost/my-image                      │ functional-852274  │ 5cdf020915f74 │ 1.47MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ docker.io/library/nginx                 │ alpine             │ 5e7abcdd20216 │ 54.2MB │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/nginx                 │ latest             │ 657fdcd1c3659 │ 155MB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852274 image ls --format table --alsologtostderr:
I1026 07:56:06.684450   53855 out.go:360] Setting OutFile to fd 1 ...
I1026 07:56:06.684749   53855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.684761   53855 out.go:374] Setting ErrFile to fd 2...
I1026 07:56:06.684767   53855 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.684980   53855 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
I1026 07:56:06.685577   53855 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.685694   53855 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.686100   53855 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
I1026 07:56:06.704423   53855 ssh_runner.go:195] Run: systemctl --version
I1026 07:56:06.704481   53855 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
I1026 07:56:06.721770   53855 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
I1026 07:56:06.820373   53855 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852274 image ls --format json --alsologtostderr:
[{"id":"5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5","repoDigests":["docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22","docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54168570"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"5cdf020915f7464fa8f7b420ce634dfd1b99fc0f5fc7a91b3e63ac1bdf963a7b","repoDigests":["localhost/my-image@sha256:a79774e33f0027192e4e4218610fa64a9ad17235a3154ca87f999ecad43a6267"],"repoTags":["localhost/my-image:functional-852274"],"size":"1468744"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"e30f02c4aa42049b7bd0f74486e7c1f11a80eb9b8c95ff033145a363606f07e6","repoDigests":["docker.io/library/fd57332d2b30d773a707b8f470da353697e1e6334704f3188ae2ba1d26b66b6e-tmp@sha256:ba6b511d74efead337af0c92b47660e9b03d736b6d44c52f9c9d3dc3dc46b7c9"],"repoTags":[],"size":"1466130"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab","repoDigests":["docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903","docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8"],"repoTags":["docker.io/library/nginx:latest"],"size":"155467611"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852274 image ls --format json --alsologtostderr:
I1026 07:56:06.457293   53800 out.go:360] Setting OutFile to fd 1 ...
I1026 07:56:06.457529   53800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.457538   53800 out.go:374] Setting ErrFile to fd 2...
I1026 07:56:06.457542   53800 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.457732   53800 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
I1026 07:56:06.458310   53800 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.458407   53800 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.458767   53800 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
I1026 07:56:06.476733   53800 ssh_runner.go:195] Run: systemctl --version
I1026 07:56:06.476778   53800 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
I1026 07:56:06.493744   53800 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
I1026 07:56:06.592129   53800 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852274 image ls --format yaml --alsologtostderr:
- id: e30f02c4aa42049b7bd0f74486e7c1f11a80eb9b8c95ff033145a363606f07e6
repoDigests:
- docker.io/library/fd57332d2b30d773a707b8f470da353697e1e6334704f3188ae2ba1d26b66b6e-tmp@sha256:ba6b511d74efead337af0c92b47660e9b03d736b6d44c52f9c9d3dc3dc46b7c9
repoTags: []
size: "1466130"
- id: 657fdcd1c3659cf57cfaa13f40842e0a26b49ec9654d48fdefee9fc8259b4aab
repoDigests:
- docker.io/library/nginx@sha256:029d4461bd98f124e531380505ceea2072418fdf28752aa73b7b273ba3048903
- docker.io/library/nginx@sha256:7e034cabf67d95246a996a3b92ad1c49c20d81526c9d7ba982aead057a0606e8
repoTags:
- docker.io/library/nginx:latest
size: "155467611"
- id: 5cdf020915f7464fa8f7b420ce634dfd1b99fc0f5fc7a91b3e63ac1bdf963a7b
repoDigests:
- localhost/my-image@sha256:a79774e33f0027192e4e4218610fa64a9ad17235a3154ca87f999ecad43a6267
repoTags:
- localhost/my-image:functional-852274
size: "1468744"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 5e7abcdd20216bbeedf1369529564ffd60f830ed3540c477938ca580b645dff5
repoDigests:
- docker.io/library/nginx@sha256:61e01287e546aac28a3f56839c136b31f590273f3b41187a36f46f6a03bbfe22
- docker.io/library/nginx@sha256:b03ccb7431a2e3172f5cbae96d82bd792935f33ecb88fbf2940559e475745c4e
repoTags:
- docker.io/library/nginx:alpine
size: "54168570"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852274 image ls --format yaml --alsologtostderr:
I1026 07:56:06.232546   53747 out.go:360] Setting OutFile to fd 1 ...
I1026 07:56:06.232843   53747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.232854   53747 out.go:374] Setting ErrFile to fd 2...
I1026 07:56:06.232858   53747 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:06.233075   53747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
I1026 07:56:06.233658   53747 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.233752   53747 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:06.234124   53747 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
I1026 07:56:06.251920   53747 ssh_runner.go:195] Run: systemctl --version
I1026 07:56:06.251969   53747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
I1026 07:56:06.269687   53747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
I1026 07:56:06.368042   53747 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)
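Note: both image-listing tests above are thin wrappers over the same node-side query; the stderr trace of each run ends in "sudo crictl images --output json", with the json/yaml formatting done client-side. A minimal manual sketch, assuming the functional-852274 profile is still up:

    out/minikube-linux-amd64 -p functional-852274 image ls --format json
    out/minikube-linux-amd64 -p functional-852274 image ls --format yaml
    # both SSH into the node and run: sudo crictl images --output json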

TestFunctional/parallel/ImageCommands/ImageBuild (2.74s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh pgrep buildkitd: exit status 1 (271.622951ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr: (2.239348277s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> e30f02c4aa4
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-852274
--> 5cdf020915f
Successfully tagged localhost/my-image:functional-852274
5cdf020915f7464fa8f7b420ce634dfd1b99fc0f5fc7a91b3e63ac1bdf963a7b
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr:
I1026 07:56:03.765629   53337 out.go:360] Setting OutFile to fd 1 ...
I1026 07:56:03.765787   53337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:03.765798   53337 out.go:374] Setting ErrFile to fd 2...
I1026 07:56:03.765802   53337 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1026 07:56:03.766051   53337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
I1026 07:56:03.766671   53337 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:03.767367   53337 config.go:182] Loaded profile config "functional-852274": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1026 07:56:03.767908   53337 cli_runner.go:164] Run: docker container inspect functional-852274 --format={{.State.Status}}
I1026 07:56:03.785894   53337 ssh_runner.go:195] Run: systemctl --version
I1026 07:56:03.785937   53337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-852274
I1026 07:56:03.802962   53337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/functional-852274/id_rsa Username:docker}
I1026 07:56:03.901046   53337 build_images.go:161] Building image from path: /tmp/build.2796413608.tar
I1026 07:56:03.901137   53337 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1026 07:56:03.909324   53337 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2796413608.tar
I1026 07:56:03.912933   53337 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2796413608.tar: stat -c "%s %y" /var/lib/minikube/build/build.2796413608.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.2796413608.tar': No such file or directory
I1026 07:56:03.912962   53337 ssh_runner.go:362] scp /tmp/build.2796413608.tar --> /var/lib/minikube/build/build.2796413608.tar (3072 bytes)
I1026 07:56:03.930594   53337 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2796413608
I1026 07:56:03.938107   53337 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2796413608 -xf /var/lib/minikube/build/build.2796413608.tar
I1026 07:56:03.946341   53337 crio.go:315] Building image: /var/lib/minikube/build/build.2796413608
I1026 07:56:03.946399   53337 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-852274 /var/lib/minikube/build/build.2796413608 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1026 07:56:05.928400   53337 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-852274 /var/lib/minikube/build/build.2796413608 --cgroup-manager=cgroupfs: (1.981981022s)
I1026 07:56:05.928467   53337 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2796413608
I1026 07:56:05.936477   53337 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2796413608.tar
I1026 07:56:05.944231   53337 build_images.go:217] Built localhost/my-image:functional-852274 from /tmp/build.2796413608.tar
I1026 07:56:05.944275   53337 build_images.go:133] succeeded building to: functional-852274
I1026 07:56:05.944282   53337 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.74s)
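Note: the "pgrep buildkitd" probe above fails by design on this crio runtime, so minikube builds on the node with podman instead (see the "sudo podman build ... --cgroup-manager=cgroupfs" line in the stderr). A hand-run sketch; the Dockerfile shown in the comment is a reconstruction from the STEP 1/3..3/3 lines, not the verified contents of testdata/build:

    # testdata/build presumably contains content.txt plus a Dockerfile like:
    #   FROM gcr.io/k8s-minikube/busybox
    #   RUN true
    #   ADD content.txt /
    out/minikube-linux-amd64 -p functional-852274 image build -t localhost/my-image:functional-852274 testdata/build --alsologtostderr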

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-852274
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)
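Note: the three update-context subtests run the identical command against different pre-existing kubeconfig states (as the subtest names suggest: no changes, no minikube cluster entry, no clusters at all); the command rewrites the profile's kubeconfig entry to the current apiserver endpoint. Sketch, with a hypothetical follow-up to inspect the result:

    out/minikube-linux-amd64 -p functional-852274 update-context --alsologtostderr -v=2
    kubectl config view --minify --context functional-852274   # hypothetical check of the rewritten entry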

TestFunctional/parallel/ProfileCmd/profile_not_create (0.5s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.50s)

TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "412.42457ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "78.676255ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.49s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "423.216198ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "84.287882ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.51s)
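Note: the timings in the two profile tests above line up with the flag semantics: the plain listings (~0.41-0.42s) validate each profile's cluster status, while the -l/--light variants (~0.08s) skip that validation and read only the stored profile config. For comparison:

    out/minikube-linux-amd64 profile list -o json           # ~423ms above
    out/minikube-linux-amd64 profile list -o json --light   # ~84ms above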

TestFunctional/parallel/MountCmd/any-port (6.47s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdany-port2988333111/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1761465344224608194" to /tmp/TestFunctionalparallelMountCmdany-port2988333111/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1761465344224608194" to /tmp/TestFunctionalparallelMountCmdany-port2988333111/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1761465344224608194" to /tmp/TestFunctionalparallelMountCmdany-port2988333111/001/test-1761465344224608194
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (338.261903ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 07:55:44.563282   12921 retry.go:31] will retry after 601.515911ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Oct 26 07:55 created-by-test
-rw-r--r-- 1 docker docker 24 Oct 26 07:55 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Oct 26 07:55 test-1761465344224608194
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh cat /mount-9p/test-1761465344224608194
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-852274 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [3bfa60bc-443c-4476-915f-dd098026a08a] Pending
helpers_test.go:352: "busybox-mount" [3bfa60bc-443c-4476-915f-dd098026a08a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [3bfa60bc-443c-4476-915f-dd098026a08a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [3bfa60bc-443c-4476-915f-dd098026a08a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.003790272s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-852274 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdany-port2988333111/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.47s)
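Note: the any-port test exercises the full 9p mount round trip: serve a host directory into the guest, wait for the mount to appear, have a pod read and write through it, then unmount. A minimal manual sketch (the host path is hypothetical):

    out/minikube-linux-amd64 mount -p functional-852274 /tmp/hostdir:/mount-9p &
    out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p"   # retried above until it succeeds
    out/minikube-linux-amd64 -p functional-852274 ssh -- ls -la /mount-9p
    out/minikube-linux-amd64 -p functional-852274 ssh "sudo umount -f /mount-9p"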

TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image rm kicbase/echo-server:functional-852274 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.53s)

TestFunctional/parallel/MountCmd/specific-port (1.75s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdspecific-port1920066353/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (321.71513ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 07:55:51.014219   12921 retry.go:31] will retry after 362.922253ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdspecific-port1920066353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "sudo umount -f /mount-9p": exit status 1 (282.704944ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-852274 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdspecific-port1920066353/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
2025/10/26 07:55:52 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.75s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T" /mount1: exit status 1 (341.619679ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1026 07:55:52.788161   12921 retry.go:31] will retry after 298.278506ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-852274 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-852274 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3979632893/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.51s)
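Note: the two mount tests above vary the same flow. specific-port pins the 9p server to a fixed port with "--port 46464", and VerifyCleanup shows that "mount --kill=true" tears down every mount daemon for the profile at once, which is why the later per-mount stop attempts report "unable to find parent, assuming dead". Sketch (host path hypothetical):

    out/minikube-linux-amd64 mount -p functional-852274 /tmp/hostdir:/mount-9p --port 46464 &
    out/minikube-linux-amd64 mount -p functional-852274 --kill=true   # kills all mount processes for the profile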

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 51989: os: process already finished
helpers_test.go:519: unable to terminate pid 51808: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.19s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-852274 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f2f24134-1e64-4b07-9283-ea9d9f3c9518] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f2f24134-1e64-4b07-9283-ea9d9f3c9518] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 7.00381554s
I1026 07:56:02.099049   12921 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (7.19s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-852274 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.180.49 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)
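Note: the tunnel serial group above reduces to: start a tunnel, wait for the LoadBalancer service to be assigned an ingress IP, hit that IP directly, then stop the tunnel. Manual sketch; the curl step is an assumption standing in for the in-test HTTP check:

    out/minikube-linux-amd64 -p functional-852274 tunnel --alsologtostderr &
    kubectl --context functional-852274 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
    curl http://10.103.180.49/   # IP taken from the AccessDirect log line above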

TestFunctional/parallel/ServiceCmd/List (1.71s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 service list: (1.707352281s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.71s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-852274 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-852274 service list -o json: (1.704101716s)
functional_test.go:1504: Took "1.704180127s" to run "out/minikube-linux-amd64 -p functional-852274 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)
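Note: both service listings above take ~1.7s, presumably because each invocation queries the apiserver for the current services rather than reading cached state:

    out/minikube-linux-amd64 -p functional-852274 service list
    out/minikube-linux-amd64 -p functional-852274 service list -o json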

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-852274
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-852274
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-852274
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (108.79s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m48.047305881s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (108.79s)
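Note: the "--ha" start above provisions a multi-control-plane cluster; the status output in StopSecondaryNode further down shows three control-plane nodes (ha-706953, -m02, -m03) plus, once AddWorkerNode has run, the worker -m04. The exact invocation from the test:

    out/minikube-linux-amd64 -p ha-706953 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=crio
    out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5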

TestMultiControlPlane/serial/DeployApp (4.57s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 kubectl -- rollout status deployment/busybox: (2.748125566s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-f2m5m -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-ngjgk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-f2m5m -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-ngjgk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-f2m5m -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-ngjgk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.57s)
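Note: the DeployApp checks walk the standard in-cluster DNS chain for each busybox replica, from short name to fully qualified service name; for one pod:

    out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.io
    out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.default
    out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- nslookup kubernetes.default.svc.cluster.local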

TestMultiControlPlane/serial/PingHostFromPods (1.03s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-5s8gg -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-f2m5m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-f2m5m -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-ngjgk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 kubectl -- exec busybox-7b57f96db7-ngjgk -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)
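Note: the shell pipeline in the commands above extracts the host IP from busybox nslookup output: the answer appears on the fifth line in the form "Address 1: <ip> <name>", so awk 'NR==5' keeps that line and cut -d' ' -f3 keeps the IP, which the test then pings. The extraction step in isolation, runnable in any busybox pod:

    nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3   # prints e.g. 192.168.49.1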

TestMultiControlPlane/serial/AddWorkerNode (23.41s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 node add --alsologtostderr -v 5: (22.51997516s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (23.41s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-706953 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.90s)

TestMultiControlPlane/serial/CopyFile (17.25s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp testdata/cp-test.txt ha-706953:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437586094/001/cp-test_ha-706953.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953:/home/docker/cp-test.txt ha-706953-m02:/home/docker/cp-test_ha-706953_ha-706953-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test_ha-706953_ha-706953-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953:/home/docker/cp-test.txt ha-706953-m03:/home/docker/cp-test_ha-706953_ha-706953-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test_ha-706953_ha-706953-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953:/home/docker/cp-test.txt ha-706953-m04:/home/docker/cp-test_ha-706953_ha-706953-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test_ha-706953_ha-706953-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp testdata/cp-test.txt ha-706953-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437586094/001/cp-test_ha-706953-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m02:/home/docker/cp-test.txt ha-706953:/home/docker/cp-test_ha-706953-m02_ha-706953.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test_ha-706953-m02_ha-706953.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m02:/home/docker/cp-test.txt ha-706953-m03:/home/docker/cp-test_ha-706953-m02_ha-706953-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test_ha-706953-m02_ha-706953-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m02:/home/docker/cp-test.txt ha-706953-m04:/home/docker/cp-test_ha-706953-m02_ha-706953-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test_ha-706953-m02_ha-706953-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp testdata/cp-test.txt ha-706953-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437586094/001/cp-test_ha-706953-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m03:/home/docker/cp-test.txt ha-706953:/home/docker/cp-test_ha-706953-m03_ha-706953.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test_ha-706953-m03_ha-706953.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m03:/home/docker/cp-test.txt ha-706953-m02:/home/docker/cp-test_ha-706953-m03_ha-706953-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test_ha-706953-m03_ha-706953-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m03:/home/docker/cp-test.txt ha-706953-m04:/home/docker/cp-test_ha-706953-m03_ha-706953-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test_ha-706953-m03_ha-706953-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp testdata/cp-test.txt ha-706953-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile437586094/001/cp-test_ha-706953-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m04:/home/docker/cp-test.txt ha-706953:/home/docker/cp-test_ha-706953-m04_ha-706953.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953 "sudo cat /home/docker/cp-test_ha-706953-m04_ha-706953.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m04:/home/docker/cp-test.txt ha-706953-m02:/home/docker/cp-test_ha-706953-m04_ha-706953-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m02 "sudo cat /home/docker/cp-test_ha-706953-m04_ha-706953-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 cp ha-706953-m04:/home/docker/cp-test.txt ha-706953-m03:/home/docker/cp-test_ha-706953-m04_ha-706953-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 ssh -n ha-706953-m03 "sudo cat /home/docker/cp-test_ha-706953-m04_ha-706953-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (17.25s)
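
The CopyFile steps above form an all-pairs matrix: cp-test.txt is copied from every node to every other node, then read back over ssh. A minimal Go sketch of that loop, built only from the `cp` and `ssh -n` invocations visible in the log (the testdata-to-node and node-to-/tmp legs are left out; this is an illustration, not the suite's own code):

package main

import (
	"fmt"
	"os/exec"
)

// run invokes the minikube binary used throughout this report and fails loudly.
func run(args ...string) {
	if out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
}

func main() {
	nodes := []string{"ha-706953", "ha-706953-m02", "ha-706953-m03", "ha-706953-m04"}
	for _, src := range nodes {
		for _, dst := range nodes {
			if src == dst {
				continue
			}
			// Copy src's cp-test.txt onto dst under a pairwise name, then cat it back.
			target := fmt.Sprintf("/home/docker/cp-test_%s_%s.txt", src, dst)
			run("-p", "ha-706953", "cp", src+":/home/docker/cp-test.txt", dst+":"+target)
			run("-p", "ha-706953", "ssh", "-n", dst, "sudo cat "+target)
		}
	}
}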

TestMultiControlPlane/serial/StopSecondaryNode (14.3s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 node stop m02 --alsologtostderr -v 5: (13.603123122s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5: exit status 7 (695.654255ms)

-- stdout --
	ha-706953
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-706953-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-706953-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-706953-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1026 08:08:51.194899   77670 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:08:51.195136   77670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:08:51.195147   77670 out.go:374] Setting ErrFile to fd 2...
	I1026 08:08:51.195151   77670 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:08:51.195349   77670 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:08:51.195522   77670 out.go:368] Setting JSON to false
	I1026 08:08:51.195556   77670 mustload.go:65] Loading cluster: ha-706953
	I1026 08:08:51.195696   77670 notify.go:220] Checking for updates...
	I1026 08:08:51.196068   77670 config.go:182] Loaded profile config "ha-706953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:08:51.196090   77670 status.go:174] checking status of ha-706953 ...
	I1026 08:08:51.196610   77670 cli_runner.go:164] Run: docker container inspect ha-706953 --format={{.State.Status}}
	I1026 08:08:51.217636   77670 status.go:371] ha-706953 host status = "Running" (err=<nil>)
	I1026 08:08:51.217683   77670 host.go:66] Checking if "ha-706953" exists ...
	I1026 08:08:51.218074   77670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-706953
	I1026 08:08:51.236367   77670 host.go:66] Checking if "ha-706953" exists ...
	I1026 08:08:51.236594   77670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:08:51.236640   77670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-706953
	I1026 08:08:51.254451   77670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/ha-706953/id_rsa Username:docker}
	I1026 08:08:51.353728   77670 ssh_runner.go:195] Run: systemctl --version
	I1026 08:08:51.360531   77670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:08:51.372758   77670 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:08:51.427333   77670 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-10-26 08:08:51.416638772 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:08:51.427861   77670 kubeconfig.go:125] found "ha-706953" server: "https://192.168.49.254:8443"
	I1026 08:08:51.427893   77670 api_server.go:166] Checking apiserver status ...
	I1026 08:08:51.427934   77670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:08:51.440342   77670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup
	W1026 08:08:51.448626   77670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1250/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:08:51.448677   77670 ssh_runner.go:195] Run: ls
	I1026 08:08:51.452383   77670 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 08:08:51.456635   77670 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 08:08:51.456656   77670 status.go:463] ha-706953 apiserver status = Running (err=<nil>)
	I1026 08:08:51.456665   77670 status.go:176] ha-706953 status: &{Name:ha-706953 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:08:51.456680   77670 status.go:174] checking status of ha-706953-m02 ...
	I1026 08:08:51.456897   77670 cli_runner.go:164] Run: docker container inspect ha-706953-m02 --format={{.State.Status}}
	I1026 08:08:51.474603   77670 status.go:371] ha-706953-m02 host status = "Stopped" (err=<nil>)
	I1026 08:08:51.474621   77670 status.go:384] host is not running, skipping remaining checks
	I1026 08:08:51.474627   77670 status.go:176] ha-706953-m02 status: &{Name:ha-706953-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:08:51.474651   77670 status.go:174] checking status of ha-706953-m03 ...
	I1026 08:08:51.474965   77670 cli_runner.go:164] Run: docker container inspect ha-706953-m03 --format={{.State.Status}}
	I1026 08:08:51.492654   77670 status.go:371] ha-706953-m03 host status = "Running" (err=<nil>)
	I1026 08:08:51.492676   77670 host.go:66] Checking if "ha-706953-m03" exists ...
	I1026 08:08:51.492947   77670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-706953-m03
	I1026 08:08:51.510973   77670 host.go:66] Checking if "ha-706953-m03" exists ...
	I1026 08:08:51.511216   77670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:08:51.511298   77670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-706953-m03
	I1026 08:08:51.528075   77670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/ha-706953-m03/id_rsa Username:docker}
	I1026 08:08:51.624697   77670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:08:51.637632   77670 kubeconfig.go:125] found "ha-706953" server: "https://192.168.49.254:8443"
	I1026 08:08:51.637658   77670 api_server.go:166] Checking apiserver status ...
	I1026 08:08:51.637690   77670 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:08:51.648817   77670 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup
	W1026 08:08:51.657042   77670 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1166/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:08:51.657090   77670 ssh_runner.go:195] Run: ls
	I1026 08:08:51.660729   77670 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1026 08:08:51.666571   77670 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1026 08:08:51.666599   77670 status.go:463] ha-706953-m03 apiserver status = Running (err=<nil>)
	I1026 08:08:51.666610   77670 status.go:176] ha-706953-m03 status: &{Name:ha-706953-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:08:51.666627   77670 status.go:174] checking status of ha-706953-m04 ...
	I1026 08:08:51.666948   77670 cli_runner.go:164] Run: docker container inspect ha-706953-m04 --format={{.State.Status}}
	I1026 08:08:51.684700   77670 status.go:371] ha-706953-m04 host status = "Running" (err=<nil>)
	I1026 08:08:51.684725   77670 host.go:66] Checking if "ha-706953-m04" exists ...
	I1026 08:08:51.685009   77670 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-706953-m04
	I1026 08:08:51.702209   77670 host.go:66] Checking if "ha-706953-m04" exists ...
	I1026 08:08:51.702551   77670 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:08:51.702600   77670 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-706953-m04
	I1026 08:08:51.721511   77670 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/ha-706953-m04/id_rsa Username:docker}
	I1026 08:08:51.818769   77670 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:08:51.831452   77670 status.go:176] ha-706953-m04 status: &{Name:ha-706953-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (14.30s)
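
With m02 stopped, `minikube status` above still prints a full per-node report but exits with status 7, so the output has to be read before the error is inspected. A minimal sketch of that handling, assuming only the behavior shown in this block:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-706953", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out)) // per-node status is emitted even on a non-zero exit

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A non-zero code (7 in the run above) signals a degraded cluster,
		// not a failure to execute the command itself.
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // e.g. the binary was not found
	}
}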

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.71s)

TestMultiControlPlane/serial/RestartSecondaryNode (9.21s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 node start m02 --alsologtostderr -v 5: (8.25541678s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (9.21s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.4s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 stop --alsologtostderr -v 5: (44.356510985s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 start --wait true --alsologtostderr -v 5
E1026 08:09:58.718798   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.183744   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.190120   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.201496   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.222865   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.264297   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.345705   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.507293   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:37.829032   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:38.471061   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:39.752416   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:10:42.314676   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 start --wait true --alsologtostderr -v 5: (59.91141579s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (104.40s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node delete m03 --alsologtostderr -v 5
E1026 08:10:47.436820   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 node delete m03 --alsologtostderr -v 5: (9.718147538s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.52s)
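
The go-template passed to kubectl above walks every node's conditions and prints the status of the "Ready" one (the surrounding single quotes are shell quoting, not template syntax). The same template can be evaluated locally with text/template; the NodeList fragment below is a hypothetical two-node stand-in for real `kubectl get nodes -o json` output:

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

// A made-up NodeList fragment with just the fields the template touches.
const nodeList = `{"items":[
  {"status":{"conditions":[{"type":"Ready","status":"True"}]}},
  {"status":{"conditions":[{"type":"MemoryPressure","status":"False"},
                           {"type":"Ready","status":"True"}]}}]}`

const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

func main() {
	var doc map[string]any
	if err := json.Unmarshal([]byte(nodeList), &doc); err != nil {
		panic(err)
	}
	// Prints " True" once per node: exactly one Ready condition each.
	t := template.Must(template.New("ready").Parse(tmpl))
	if err := t.Execute(os.Stdout, doc); err != nil {
		panic(err)
	}
}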

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1026 08:10:57.678890   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (41.14s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 stop --alsologtostderr -v 5
E1026 08:11:18.161035   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:11:21.794422   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 stop --alsologtostderr -v 5: (41.022771117s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5: exit status 7 (114.121926ms)

-- stdout --
	ha-706953
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-706953-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-706953-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1026 08:11:39.345620   91794 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:11:39.345916   91794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:11:39.345926   91794 out.go:374] Setting ErrFile to fd 2...
	I1026 08:11:39.345929   91794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:11:39.346214   91794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:11:39.346455   91794 out.go:368] Setting JSON to false
	I1026 08:11:39.346488   91794 mustload.go:65] Loading cluster: ha-706953
	I1026 08:11:39.346595   91794 notify.go:220] Checking for updates...
	I1026 08:11:39.347057   91794 config.go:182] Loaded profile config "ha-706953": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:11:39.347076   91794 status.go:174] checking status of ha-706953 ...
	I1026 08:11:39.347652   91794 cli_runner.go:164] Run: docker container inspect ha-706953 --format={{.State.Status}}
	I1026 08:11:39.368913   91794 status.go:371] ha-706953 host status = "Stopped" (err=<nil>)
	I1026 08:11:39.368949   91794 status.go:384] host is not running, skipping remaining checks
	I1026 08:11:39.368955   91794 status.go:176] ha-706953 status: &{Name:ha-706953 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:11:39.368981   91794 status.go:174] checking status of ha-706953-m02 ...
	I1026 08:11:39.369271   91794 cli_runner.go:164] Run: docker container inspect ha-706953-m02 --format={{.State.Status}}
	I1026 08:11:39.386102   91794 status.go:371] ha-706953-m02 host status = "Stopped" (err=<nil>)
	I1026 08:11:39.386125   91794 status.go:384] host is not running, skipping remaining checks
	I1026 08:11:39.386130   91794 status.go:176] ha-706953-m02 status: &{Name:ha-706953-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:11:39.386148   91794 status.go:174] checking status of ha-706953-m04 ...
	I1026 08:11:39.386405   91794 cli_runner.go:164] Run: docker container inspect ha-706953-m04 --format={{.State.Status}}
	I1026 08:11:39.403137   91794 status.go:371] ha-706953-m04 host status = "Stopped" (err=<nil>)
	I1026 08:11:39.403155   91794 status.go:384] host is not running, skipping remaining checks
	I1026 08:11:39.403160   91794 status.go:176] ha-706953-m04 status: &{Name:ha-706953-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (41.14s)

TestMultiControlPlane/serial/RestartCluster (58.16s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1026 08:11:59.123311   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (57.359519859s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (58.16s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (37.5s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-706953 node add --control-plane --alsologtostderr -v 5: (36.609189756s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-706953 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (37.50s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.89s)

TestJSONOutput/start/Command (37.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-677217 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-677217 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.11866357s)
--- PASS: TestJSONOutput/start/Command (37.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.14s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-677217 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-677217 --output=json --user=testUser: (6.144445151s)
--- PASS: TestJSONOutput/stop/Command (6.14s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-419495 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-419495 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.174533ms)

-- stdout --
	{"specversion":"1.0","id":"481f6f21-da5a-4273-ad3e-4e767febbf2c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-419495] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"81d3393f-6c3b-4b7c-b42e-fe6a2786888f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"c822cd77-c146-4161-871c-7adba905ff74","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c7307848-9a8d-4a94-ab01-9c83c57925e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig"}}
	{"specversion":"1.0","id":"686953f7-bd57-42d1-a213-48b037b8d5d4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube"}}
	{"specversion":"1.0","id":"697a3b84-cf23-4eda-8dbe-8da074d23fa2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8684e551-5576-44e9-b00b-3d053aeae300","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f4c37984-49fe-4bb1-aa8d-f3d5403e08ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-419495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-419495
--- PASS: TestErrorJSONOutput (0.23s)
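
Each line in the --output=json stream above is a CloudEvents-style envelope (specversion 1.0) whose data payload carries step, info, or error details as strings. A minimal decoder for the error event from this block, assuming only the fields visible above:

package main

import (
	"encoding/json"
	"fmt"
)

// cloudEvent mirrors the envelope fields printed in the log; the data values
// are all strings there ("exitcode":"56", "totalsteps":"19", ...).
type cloudEvent struct {
	SpecVersion     string            `json:"specversion"`
	ID              string            `json:"id"`
	Source          string            `json:"source"`
	Type            string            `json:"type"`
	DataContentType string            `json:"datacontenttype"`
	Data            map[string]string `json:"data"`
}

func main() {
	line := `{"specversion":"1.0","id":"f4c37984-49fe-4bb1-aa8d-f3d5403e08ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}`
	var ev cloudEvent
	if err := json.Unmarshal([]byte(line), &ev); err != nil {
		panic(err)
	}
	if ev.Type == "io.k8s.sigs.minikube.error" {
		fmt.Printf("exit code %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
	}
}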

TestKicCustomNetwork/create_custom_network (26.41s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-602740 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-602740 --network=: (24.258321624s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-602740" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-602740
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-602740: (2.1307822s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.41s)

TestKicCustomNetwork/use_default_bridge_network (26.11s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-811058 --network=bridge
E1026 08:14:58.719100   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-811058 --network=bridge: (24.09917873s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-811058" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-811058
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-811058: (1.993403583s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.11s)

TestKicExistingNetwork (24.6s)

=== RUN   TestKicExistingNetwork
I1026 08:15:10.143048   12921 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1026 08:15:10.159810   12921 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1026 08:15:10.159888   12921 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1026 08:15:10.159905   12921 cli_runner.go:164] Run: docker network inspect existing-network
W1026 08:15:10.176913   12921 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1026 08:15:10.176940   12921 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1026 08:15:10.176952   12921 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1026 08:15:10.177121   12921 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1026 08:15:10.193969   12921 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-c18b67b7e42d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:66:70:41:72:e4:6d} reservation:<nil>}
I1026 08:15:10.194325   12921 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019f10c0}
I1026 08:15:10.194358   12921 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1026 08:15:10.194398   12921 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1026 08:15:10.250509   12921 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-530668 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-530668 --network=existing-network: (22.466290056s)
helpers_test.go:175: Cleaning up "existing-network-530668" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-530668
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-530668: (1.993931174s)
I1026 08:15:34.727811   12921 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.60s)
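
The trace above shows the setup this test relies on: the helper inspects the bridge network, skips 192.168.49.0/24 as taken, and pre-creates existing-network on 192.168.58.0/24 carrying minikube's labels, which `minikube start --network=existing-network` then adopts instead of allocating its own subnet. A sketch of that pre-creation step, with the flags copied from the network_create.go line above:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Create the network exactly as logged; minikube later recognizes it
	// by name and reuses it rather than creating a new one.
	cmd := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.58.0/24",
		"--gateway=192.168.58.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		"--label=name.minikube.sigs.k8s.io=existing-network",
		"existing-network")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}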

TestKicCustomSubnet (24.44s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-234953 --subnet=192.168.60.0/24
E1026 08:15:37.184401   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-234953 --subnet=192.168.60.0/24: (22.258445637s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-234953 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-234953" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-234953
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-234953: (2.156593395s)
--- PASS: TestKicCustomSubnet (24.44s)

TestKicStaticIP (24.72s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-950664 --static-ip=192.168.200.200
E1026 08:16:04.887532   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-950664 --static-ip=192.168.200.200: (22.447780057s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-950664 ip
helpers_test.go:175: Cleaning up "static-ip-950664" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-950664
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-950664: (2.129346814s)
--- PASS: TestKicStaticIP (24.72s)
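
The assertion behind this test is a straight string comparison: after starting with --static-ip=192.168.200.200, `minikube ip` must print exactly that address. A minimal sketch of the check:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "static-ip-950664", "ip").Output()
	if err != nil {
		panic(err)
	}
	got := strings.TrimSpace(string(out))
	if got != "192.168.200.200" {
		panic(fmt.Sprintf("expected 192.168.200.200, got %q", got))
	}
	fmt.Println("static IP verified:", got)
}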

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (50.22s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-063715 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-063715 --driver=docker  --container-runtime=crio: (21.554421912s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-066495 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-066495 --driver=docker  --container-runtime=crio: (22.708118175s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-063715
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-066495
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-066495" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-066495
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-066495: (2.362575836s)
helpers_test.go:175: Cleaning up "first-063715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-063715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-063715: (2.369960332s)
--- PASS: TestMinikubeProfile (50.22s)
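
`profile list -ojson` above emits one JSON document describing all profiles. Its exact schema is not shown in this log, so the sketch below deliberately decodes it generically instead of assuming field names:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "-o", "json").Output()
	if err != nil {
		panic(err)
	}
	// Decode without committing to a schema: keep each top-level value raw.
	var doc map[string]json.RawMessage
	if err := json.Unmarshal(out, &doc); err != nil {
		panic(err)
	}
	for key, raw := range doc {
		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
	}
}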

TestMountStart/serial/StartWithMountFirst (5.55s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-704763 --memory=3072 --mount-string /tmp/TestMountStartserial343597047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-704763 --memory=3072 --mount-string /tmp/TestMountStartserial343597047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.55014763s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.55s)
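
The MountStart tests revolve around one round-trip: start a profile with a host directory mounted at /minikube-host (the --mount-* flags above appear to configure the 9p mount), then list that directory from inside the node. A sketch of the round-trip with the logged flags; the profile name "demo-mount" and host path "/tmp/demo" are hypothetical stand-ins:

package main

import (
	"fmt"
	"os/exec"
)

func run(args ...string) string {
	out, err := exec.Command("out/minikube-linux-amd64", args...).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("%v: %s", err, out))
	}
	return string(out)
}

func main() {
	run("start", "-p", "demo-mount", "--memory=3072",
		"--mount-string", "/tmp/demo:/minikube-host",
		"--mount-gid", "0", "--mount-msize", "6543",
		"--mount-port", "46464", "--mount-uid", "0",
		"--no-kubernetes", "--driver=docker", "--container-runtime=crio")
	// The host directory's contents should appear at the mount point.
	fmt.Print(run("-p", "demo-mount", "ssh", "--", "ls", "/minikube-host"))
}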

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-704763 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (5.16s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-715199 --memory=3072 --mount-string /tmp/TestMountStartserial343597047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-715199 --memory=3072 --mount-string /tmp/TestMountStartserial343597047/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.161543853s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.16s)

TestMountStart/serial/VerifyMountSecond (0.27s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-715199 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

TestMountStart/serial/DeleteFirst (1.7s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-704763 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-704763 --alsologtostderr -v=5: (1.70435224s)
--- PASS: TestMountStart/serial/DeleteFirst (1.70s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-715199 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-715199
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-715199: (1.24343501s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.16s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-715199
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-715199: (6.156475063s)
--- PASS: TestMountStart/serial/RestartStopped (7.16s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-715199 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (65.05s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575349 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575349 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m4.573349757s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.05s)

TestMultiNode/serial/DeployApp2Nodes (3.95s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-575349 -- rollout status deployment/busybox: (2.400690747s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-7zxb2 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-r8k7x -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-7zxb2 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-r8k7x -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-7zxb2 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-r8k7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.95s)
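
The test deploys a two-replica busybox Deployment and resolves three names from each pod (kubernetes.io, kubernetes.default, and the cluster FQDN) to prove both external and in-cluster DNS work from both nodes. A condensed equivalent of the checks above (pod names vary per run, so they are collected dynamically):

    kubectl apply -f testdata/multinodes/multinode-pod-dns-test.yaml
    kubectl rollout status deployment/busybox
    for pod in $(kubectl get pods -o jsonpath='{.items[*].metadata.name}'); do
      kubectl exec "$pod" -- nslookup kubernetes.default.svc.cluster.local
    done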

TestMultiNode/serial/PingHostFrom2Pods (0.69s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-7zxb2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-7zxb2 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-r8k7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-575349 -- exec busybox-7b57f96db7-r8k7x -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)
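
Host reachability is checked by resolving host.minikube.internal inside each pod and pinging the result. The awk/cut pipeline is tied to busybox nslookup's output layout, where the answer lands on line 5 and the address is the third space-separated field; run inside a pod it reduces to:

    # busybox nslookup prints the answer on line 5; field 3 is the resolved address
    HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
    ping -c 1 "$HOST_IP"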

TestMultiNode/serial/AddNode (23.33s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-575349 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-575349 -v=5 --alsologtostderr: (22.68185246s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.33s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-575349 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.67s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.67s)

TestMultiNode/serial/CopyFile (9.84s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp testdata/cp-test.txt multinode-575349:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211769162/001/cp-test_multinode-575349.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349:/home/docker/cp-test.txt multinode-575349-m02:/home/docker/cp-test_multinode-575349_multinode-575349-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test_multinode-575349_multinode-575349-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349:/home/docker/cp-test.txt multinode-575349-m03:/home/docker/cp-test_multinode-575349_multinode-575349-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test_multinode-575349_multinode-575349-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp testdata/cp-test.txt multinode-575349-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211769162/001/cp-test_multinode-575349-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m02:/home/docker/cp-test.txt multinode-575349:/home/docker/cp-test_multinode-575349-m02_multinode-575349.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test_multinode-575349-m02_multinode-575349.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m02:/home/docker/cp-test.txt multinode-575349-m03:/home/docker/cp-test_multinode-575349-m02_multinode-575349-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test_multinode-575349-m02_multinode-575349-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp testdata/cp-test.txt multinode-575349-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1211769162/001/cp-test_multinode-575349-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m03:/home/docker/cp-test.txt multinode-575349:/home/docker/cp-test_multinode-575349-m03_multinode-575349.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349 "sudo cat /home/docker/cp-test_multinode-575349-m03_multinode-575349.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 cp multinode-575349-m03:/home/docker/cp-test.txt multinode-575349-m02:/home/docker/cp-test_multinode-575349-m03_multinode-575349-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test_multinode-575349-m03_multinode-575349-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.84s)
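
The copy matrix above exercises every direction `minikube cp` supports, verifying each transfer with `ssh ... sudo cat`. Reduced to one example per shape (the local destination path is a placeholder; everything else mirrors the logged commands):

    minikube -p multinode-575349 cp testdata/cp-test.txt multinode-575349:/home/docker/cp-test.txt                          # host -> node
    minikube -p multinode-575349 cp multinode-575349:/home/docker/cp-test.txt /tmp/out.txt                                  # node -> host
    minikube -p multinode-575349 cp multinode-575349:/home/docker/cp-test.txt multinode-575349-m02:/home/docker/cp-test.txt # node -> node
    minikube -p multinode-575349 ssh -n multinode-575349-m02 "sudo cat /home/docker/cp-test.txt"                            # verify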

TestMultiNode/serial/StopNode (2.27s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-575349 node stop m03: (1.264868325s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575349 status: exit status 7 (507.681987ms)
-- stdout --
	multinode-575349
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-575349-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-575349-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr: exit status 7 (497.728422ms)
-- stdout --
	multinode-575349
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-575349-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-575349-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 08:19:23.357942  151726 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:19:23.358211  151726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:23.358223  151726 out.go:374] Setting ErrFile to fd 2...
	I1026 08:19:23.358228  151726 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:19:23.358466  151726 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:19:23.358679  151726 out.go:368] Setting JSON to false
	I1026 08:19:23.358716  151726 mustload.go:65] Loading cluster: multinode-575349
	I1026 08:19:23.358829  151726 notify.go:220] Checking for updates...
	I1026 08:19:23.359162  151726 config.go:182] Loaded profile config "multinode-575349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:19:23.359175  151726 status.go:174] checking status of multinode-575349 ...
	I1026 08:19:23.359744  151726 cli_runner.go:164] Run: docker container inspect multinode-575349 --format={{.State.Status}}
	I1026 08:19:23.378402  151726 status.go:371] multinode-575349 host status = "Running" (err=<nil>)
	I1026 08:19:23.378430  151726 host.go:66] Checking if "multinode-575349" exists ...
	I1026 08:19:23.378771  151726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-575349
	I1026 08:19:23.396400  151726 host.go:66] Checking if "multinode-575349" exists ...
	I1026 08:19:23.396638  151726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:19:23.396671  151726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-575349
	I1026 08:19:23.415495  151726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/multinode-575349/id_rsa Username:docker}
	I1026 08:19:23.512542  151726 ssh_runner.go:195] Run: systemctl --version
	I1026 08:19:23.518775  151726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:19:23.530770  151726 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:19:23.587948  151726 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-10-26 08:19:23.577903363 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:19:23.588464  151726 kubeconfig.go:125] found "multinode-575349" server: "https://192.168.67.2:8443"
	I1026 08:19:23.588491  151726 api_server.go:166] Checking apiserver status ...
	I1026 08:19:23.588523  151726 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1026 08:19:23.599994  151726 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup
	W1026 08:19:23.608464  151726 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1253/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1026 08:19:23.608506  151726 ssh_runner.go:195] Run: ls
	I1026 08:19:23.612234  151726 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1026 08:19:23.616319  151726 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1026 08:19:23.616337  151726 status.go:463] multinode-575349 apiserver status = Running (err=<nil>)
	I1026 08:19:23.616346  151726 status.go:176] multinode-575349 status: &{Name:multinode-575349 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:19:23.616374  151726 status.go:174] checking status of multinode-575349-m02 ...
	I1026 08:19:23.616599  151726 cli_runner.go:164] Run: docker container inspect multinode-575349-m02 --format={{.State.Status}}
	I1026 08:19:23.635696  151726 status.go:371] multinode-575349-m02 host status = "Running" (err=<nil>)
	I1026 08:19:23.635717  151726 host.go:66] Checking if "multinode-575349-m02" exists ...
	I1026 08:19:23.636003  151726 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-575349-m02
	I1026 08:19:23.653564  151726 host.go:66] Checking if "multinode-575349-m02" exists ...
	I1026 08:19:23.653877  151726 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1026 08:19:23.653911  151726 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-575349-m02
	I1026 08:19:23.670232  151726 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21772-9429/.minikube/machines/multinode-575349-m02/id_rsa Username:docker}
	I1026 08:19:23.766613  151726 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1026 08:19:23.778945  151726 status.go:176] multinode-575349-m02 status: &{Name:multinode-575349-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:19:23.779024  151726 status.go:174] checking status of multinode-575349-m03 ...
	I1026 08:19:23.779302  151726 cli_runner.go:164] Run: docker container inspect multinode-575349-m03 --format={{.State.Status}}
	I1026 08:19:23.797054  151726 status.go:371] multinode-575349-m03 host status = "Stopped" (err=<nil>)
	I1026 08:19:23.797074  151726 status.go:384] host is not running, skipping remaining checks
	I1026 08:19:23.797080  151726 status.go:176] multinode-575349-m03 status: &{Name:multinode-575349-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.27s)
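
Note the exit-code contract being asserted: `minikube status` exits 0 only when everything is running, and with m03 stopped it exits 7 while still printing per-node state, so scripts can branch on the code. A sketch of that pattern (exit code 7 is what this run observed for a stopped node):

    minikube -p multinode-575349 node stop m03
    minikube -p multinode-575349 status
    rc=$?
    [ "$rc" -eq 7 ] && echo "at least one node is stopped"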

TestMultiNode/serial/StartAfterStop (7.19s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-575349 node start m03 -v=5 --alsologtostderr: (6.487708795s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.19s)

TestMultiNode/serial/RestartKeepsNodes (80.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575349
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-575349
E1026 08:19:58.719692   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-575349: (29.530540624s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575349 --wait=true -v=5 --alsologtostderr
E1026 08:20:37.184343   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575349 --wait=true -v=5 --alsologtostderr: (50.841505446s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575349
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.49s)

TestMultiNode/serial/DeleteNode (5.24s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-575349 node delete m03: (4.646070931s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.24s)
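
The final assertion walks every node's conditions with a go-template and prints only the Ready status, one line per node, which makes "all remaining nodes Ready" a simple text check. The same template in standalone form (the logged invocation wraps it in an extra layer of quotes for the test harness):

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}}{{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
    # expected output after the delete: one "True" per remaining node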

TestMultiNode/serial/StopMultiNode (30.35s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-575349 stop: (30.154102934s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575349 status: exit status 7 (95.334758ms)
-- stdout --
	multinode-575349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-575349-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr: exit status 7 (96.151476ms)
-- stdout --
	multinode-575349
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-575349-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1026 08:21:27.028305  161458 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:21:27.028424  161458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:21:27.028434  161458 out.go:374] Setting ErrFile to fd 2...
	I1026 08:21:27.028440  161458 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:21:27.028644  161458 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:21:27.028795  161458 out.go:368] Setting JSON to false
	I1026 08:21:27.028823  161458 mustload.go:65] Loading cluster: multinode-575349
	I1026 08:21:27.028938  161458 notify.go:220] Checking for updates...
	I1026 08:21:27.029136  161458 config.go:182] Loaded profile config "multinode-575349": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:21:27.029149  161458 status.go:174] checking status of multinode-575349 ...
	I1026 08:21:27.029596  161458 cli_runner.go:164] Run: docker container inspect multinode-575349 --format={{.State.Status}}
	I1026 08:21:27.049731  161458 status.go:371] multinode-575349 host status = "Stopped" (err=<nil>)
	I1026 08:21:27.049767  161458 status.go:384] host is not running, skipping remaining checks
	I1026 08:21:27.049775  161458 status.go:176] multinode-575349 status: &{Name:multinode-575349 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1026 08:21:27.049797  161458 status.go:174] checking status of multinode-575349-m02 ...
	I1026 08:21:27.050067  161458 cli_runner.go:164] Run: docker container inspect multinode-575349-m02 --format={{.State.Status}}
	I1026 08:21:27.067417  161458 status.go:371] multinode-575349-m02 host status = "Stopped" (err=<nil>)
	I1026 08:21:27.067434  161458 status.go:384] host is not running, skipping remaining checks
	I1026 08:21:27.067440  161458 status.go:176] multinode-575349-m02 status: &{Name:multinode-575349-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.35s)

TestMultiNode/serial/RestartMultiNode (44.58s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575349 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575349 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (43.978182754s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-575349 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (44.58s)

TestMultiNode/serial/ValidateNameConflict (25.05s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-575349
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575349-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-575349-m02 --driver=docker  --container-runtime=crio: exit status 14 (78.256636ms)
-- stdout --
	* [multinode-575349-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-575349-m02' is duplicated with machine name 'multinode-575349-m02' in profile 'multinode-575349'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-575349-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-575349-m03 --driver=docker  --container-runtime=crio: (22.237299891s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-575349
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-575349: exit status 80 (289.832834ms)
-- stdout --
	* Adding node m03 to cluster multinode-575349 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-575349-m03 already exists in multinode-575349-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-575349-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-575349-m03: (2.385213349s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.05s)
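
The collision comes from minikube's node-naming scheme: a profile named X with extra nodes creates machines X-m02, X-m03, and so on, so a new profile literally named X-m02 is rejected up front (exit 14), and `node add` likewise refuses a machine name already owned by another profile (exit 80). A sketch of the failing path (the profile name is a placeholder):

    minikube start -p demo --nodes=2     # creates machines demo and demo-m02
    minikube start -p demo-m02           # exit 14: profile name duplicates an existing machine name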

TestPreload (87.19s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-244810 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-244810 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.689802665s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-244810 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-244810 image pull gcr.io/k8s-minikube/busybox: (1.626862277s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-244810
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-244810: (5.855734676s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-244810 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-244810 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (29.387733486s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-244810 image list
helpers_test.go:175: Cleaning up "test-preload-244810" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-244810
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-244810: (2.401223969s)
--- PASS: TestPreload (87.19s)
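
The preload test is a cache-integrity check: build a cluster with --preload=false, pull an extra image, stop, then restart with preloads allowed and confirm the image pulled outside the preload tarball is still present. Condensed (profile name is a placeholder; flags mirror the logged commands):

    minikube start -p preload-demo --preload=false --kubernetes-version=v1.32.0 \
      --driver=docker --container-runtime=crio
    minikube -p preload-demo image pull gcr.io/k8s-minikube/busybox
    minikube stop -p preload-demo
    minikube start -p preload-demo                       # restart, preload allowed this time
    minikube -p preload-demo image list | grep busybox   # the pulled image must survive the restart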

TestScheduledStopUnix (97.94s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-422857 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-422857 --memory=3072 --driver=docker  --container-runtime=crio: (22.185019291s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-422857 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-422857 -n scheduled-stop-422857
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-422857 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1026 08:24:30.686158   12921 retry.go:31] will retry after 112.357µs: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.687315   12921 retry.go:31] will retry after 153.009µs: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.688462   12921 retry.go:31] will retry after 167.116µs: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.689597   12921 retry.go:31] will retry after 281.782µs: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.690723   12921 retry.go:31] will retry after 527.683µs: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.691844   12921 retry.go:31] will retry after 1.025554ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.692966   12921 retry.go:31] will retry after 1.687932ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.695160   12921 retry.go:31] will retry after 1.325297ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.697369   12921 retry.go:31] will retry after 2.874097ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.700539   12921 retry.go:31] will retry after 2.738928ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.703743   12921 retry.go:31] will retry after 5.300212ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.709975   12921 retry.go:31] will retry after 6.56774ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.717170   12921 retry.go:31] will retry after 8.482207ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.726423   12921 retry.go:31] will retry after 9.780209ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.736731   12921 retry.go:31] will retry after 32.519062ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
I1026 08:24:30.770170   12921 retry.go:31] will retry after 56.043135ms: open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/scheduled-stop-422857/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-422857 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-422857 -n scheduled-stop-422857
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-422857
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-422857 --schedule 15s
E1026 08:24:58.719591   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
E1026 08:25:37.191136   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-422857
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-422857: exit status 7 (76.781251ms)
-- stdout --
	scheduled-stop-422857
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-422857 -n scheduled-stop-422857
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-422857 -n scheduled-stop-422857: exit status 7 (76.477337ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-422857" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-422857
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-422857: (4.255443101s)
--- PASS: TestScheduledStopUnix (97.94s)
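
Scheduled stop is driven by a detached timer process whose pid is stored under the profile directory, which is what the retry loop above is polling for. The user-facing flow is arm, inspect, cancel, re-arm:

    minikube stop -p scheduled-stop-422857 --schedule 5m                    # arm a stop 5 minutes out
    minikube status -p scheduled-stop-422857 --format='{{.TimeToStop}}'     # inspect the pending timer
    minikube stop -p scheduled-stop-422857 --cancel-scheduled               # disarm
    minikube stop -p scheduled-stop-422857 --schedule 15s                   # re-arm; ~15s later, status exits 7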

TestInsufficientStorage (9.58s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-232115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-232115 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.101083177s)
-- stdout --
	{"specversion":"1.0","id":"13cb4bc2-8d5d-49a9-8621-32ff7b58679c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-232115] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0bfebf3d-e4a0-4eb1-90e7-74d505ffb3c2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21772"}}
	{"specversion":"1.0","id":"cb51f1fd-cbd0-4faa-9387-9c4f143064b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"6ded740b-5a1a-426f-a566-35e20e8ed352","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig"}}
	{"specversion":"1.0","id":"a35f3071-3c30-444a-8c46-d1dc679ebf91","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube"}}
	{"specversion":"1.0","id":"215f0354-4647-4031-9018-6ee9dc8179d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f02e7183-f6da-4a39-ba2e-5c567e1e24f2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e0ca6e82-436c-4aad-9495-a8f258f0953a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"33a4c10b-716b-494f-a491-648d69c67e7f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"92087a64-8ac4-47ec-9d94-b63d3d387921","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3799317f-e042-4b00-9666-68e425826778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"34ff5252-842e-4ff5-9798-ad8db1ddd06e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-232115\" primary control-plane node in \"insufficient-storage-232115\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"d83cad78-43c3-4cd8-a609-0606b2d7a5c6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1760939008-21773 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"909bf8ad-3273-4e37-9f28-8fcf61a02296","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"75d11c0a-0cff-45d1-b357-139e20ed831b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-232115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-232115 --output=json --layout=cluster: exit status 7 (289.087408ms)
-- stdout --
	{"Name":"insufficient-storage-232115","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-232115","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1026 08:25:53.369868  181615 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-232115" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-232115 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-232115 --output=json --layout=cluster: exit status 7 (287.151324ms)
-- stdout --
	{"Name":"insufficient-storage-232115","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-232115","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1026 08:25:53.658106  181728 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-232115" does not appear in /home/jenkins/minikube-integration/21772-9429/kubeconfig
	E1026 08:25:53.668281  181728 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/insufficient-storage-232115/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-232115" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-232115
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-232115: (1.89687685s)
--- PASS: TestInsufficientStorage (9.58s)
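
With --output=json, every start step and error is emitted as a one-line CloudEvent, so the out-of-space failure is machine-readable: the final event carries exitcode 26 and the name RSRC_DOCKER_STORAGE. The test simulates a full disk via the MINIKUBE_TEST_* variables visible in the output above; a sketch of extracting the error with jq (jq is not part of the test, and the profile name is a placeholder):

    MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
      minikube start -p storage-demo --output=json --driver=docker --container-runtime=crio \
      | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'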

TestRunningBinaryUpgrade (47.8s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.2605325699 start -p running-upgrade-774916 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.2605325699 start -p running-upgrade-774916 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.512341169s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-774916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-774916 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.342698458s)
helpers_test.go:175: Cleaning up "running-upgrade-774916" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-774916
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-774916: (2.453275622s)
--- PASS: TestRunningBinaryUpgrade (47.80s)

TestKubernetesUpgrade (302.42s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (28.91137025s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-462840
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-462840: (1.875024808s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-462840 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-462840 status --format={{.Host}}: exit status 7 (93.683751ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.408764409s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-462840 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (98.496057ms)
-- stdout --
	* [kubernetes-upgrade-462840] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-462840
	    minikube start -p kubernetes-upgrade-462840 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4628402 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-462840 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-462840 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.413922701s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-462840" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-462840
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-462840: (2.539904645s)
--- PASS: TestKubernetesUpgrade (302.42s)
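
The downgrade refusal above is contractual: minikube documents exit code 106 as K8S_DOWNGRADE_UNSUPPORTED. A minimal sketch (not the suite's actual helper) of asserting that contract from Go, reusing the profile and versions from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Attempt the unsupported downgrade from v1.34.1 to v1.28.0.
		cmd := exec.Command("minikube", "start", "-p", "kubernetes-upgrade-462840",
			"--memory=3072", "--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio")
		err := cmd.Run()
		exitErr, ok := err.(*exec.ExitError)
		if !ok || exitErr.ExitCode() != 106 {
			fmt.Printf("expected exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), got: %v\n", err)
			return
		}
		fmt.Println("downgrade correctly refused with exit status 106")
	}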

                                                
                                    
x
+
TestMissingContainerUpgrade (88.22s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.3184629529 start -p missing-upgrade-300975 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.3184629529 start -p missing-upgrade-300975 --memory=3072 --driver=docker  --container-runtime=crio: (40.805352875s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-300975
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-300975: (1.910122781s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-300975
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-300975 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-300975 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (40.14318337s)
helpers_test.go:175: Cleaning up "missing-upgrade-300975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-300975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-300975: (4.816898419s)
--- PASS: TestMissingContainerUpgrade (88.22s)
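
The sequence above can be reproduced by hand: with the docker driver, the node container is named after the profile, so stopping and removing it simulates a host that lost the container between binary versions. A rough sketch following the log's steps (the old-binary path is the temporary download shown in the log; substitute your own):

	package main

	import (
		"log"
		"os/exec"
	)

	func run(name string, args ...string) {
		if out, err := exec.Command(name, args...).CombinedOutput(); err != nil {
			log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
		}
	}

	func main() {
		profile := "missing-upgrade-300975"
		run("/tmp/minikube-v1.32.0.3184629529", "start", "-p", profile,
			"--memory=3072", "--driver=docker", "--container-runtime=crio")
		run("docker", "stop", profile) // node container carries the profile name
		run("docker", "rm", profile)
		// the current binary should notice the missing container and recreate it
		run("minikube", "start", "-p", profile, "--memory=3072",
			"--driver=docker", "--container-runtime=crio")
	}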

                                                
                                    
x
+
TestPause/serial/Start (51.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-504806 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-504806 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (51.018393755s)
--- PASS: TestPause/serial/Start (51.02s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (11.15s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-504806 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-504806 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (11.137115081s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (11.15s)
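
The 11s runtime above is the point of this test: a repeat start of an already-running profile should reconcile rather than rebuild. One rough way to observe that outside the suite is to time the second start; the 60-second bound below is an illustrative threshold, not the suite's actual check:

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		start := time.Now()
		cmd := exec.Command("minikube", "start", "-p", "pause-504806",
			"--driver=docker", "--container-runtime=crio")
		if err := cmd.Run(); err != nil {
			fmt.Println("second start failed:", err)
			return
		}
		if d := time.Since(start); d < 60*time.Second {
			fmt.Printf("second start took %s: no reconfiguration suspected\n", d)
		}
	}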

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.42s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (43.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.3890952255 start -p stopped-upgrade-603429 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.3890952255 start -p stopped-upgrade-603429 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.285643084s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.3890952255 -p stopped-upgrade-603429 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.3890952255 -p stopped-upgrade-603429 stop: (4.877713159s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-603429 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-603429 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (17.232470047s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (43.40s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-603429
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-603429: (1.006301353s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.01s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (107.98544ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-815548] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.11s)
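
The flag conflict above fails fast with minikube's usage exit code 14 and an MK_USAGE message on stderr. A small sketch of that assertion (the assertion style is illustrative, not the suite's):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		var stderr bytes.Buffer
		cmd := exec.Command("minikube", "start", "-p", "NoKubernetes-815548",
			"--no-kubernetes", "--kubernetes-version=v1.28.0",
			"--driver=docker", "--container-runtime=crio")
		cmd.Stderr = &stderr
		err := cmd.Run()
		exitErr, ok := err.(*exec.ExitError)
		if !ok || exitErr.ExitCode() != 14 || !strings.Contains(stderr.String(), "MK_USAGE") {
			fmt.Printf("expected MK_USAGE exit 14, got %v; stderr: %s\n", err, stderr.String())
			return
		}
		fmt.Println("conflicting flags correctly rejected")
	}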

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (22.84s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-815548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-815548 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (22.488417569s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-815548 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (22.84s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.81s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-110992 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-110992 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.84139ms)

                                                
                                                
-- stdout --
	* [false-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21772
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1026 08:28:56.511351  226318 out.go:360] Setting OutFile to fd 1 ...
	I1026 08:28:56.511459  226318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:28:56.511471  226318 out.go:374] Setting ErrFile to fd 2...
	I1026 08:28:56.511477  226318 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1026 08:28:56.511702  226318 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21772-9429/.minikube/bin
	I1026 08:28:56.512143  226318 out.go:368] Setting JSON to false
	I1026 08:28:56.513321  226318 start.go:131] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4287,"bootTime":1761463049,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1042-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1026 08:28:56.513401  226318 start.go:141] virtualization: kvm guest
	I1026 08:28:56.515294  226318 out.go:179] * [false-110992] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1026 08:28:56.516495  226318 notify.go:220] Checking for updates...
	I1026 08:28:56.516502  226318 out.go:179]   - MINIKUBE_LOCATION=21772
	I1026 08:28:56.517878  226318 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1026 08:28:56.518932  226318 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21772-9429/kubeconfig
	I1026 08:28:56.520012  226318 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21772-9429/.minikube
	I1026 08:28:56.524445  226318 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1026 08:28:56.525457  226318 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1026 08:28:56.526837  226318 config.go:182] Loaded profile config "NoKubernetes-815548": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:28:56.526919  226318 config.go:182] Loaded profile config "cert-expiration-535689": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:28:56.527010  226318 config.go:182] Loaded profile config "kubernetes-upgrade-462840": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1026 08:28:56.527105  226318 driver.go:421] Setting default libvirt URI to qemu:///system
	I1026 08:28:56.552881  226318 docker.go:123] docker version: linux-28.5.1:Docker Engine - Community
	I1026 08:28:56.552965  226318 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1026 08:28:56.611378  226318 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-10-26 08:28:56.600214643 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1042-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652080640 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.2] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.44] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1026 08:28:56.611520  226318 docker.go:318] overlay module found
	I1026 08:28:56.612991  226318 out.go:179] * Using the docker driver based on user configuration
	I1026 08:28:56.613994  226318 start.go:305] selected driver: docker
	I1026 08:28:56.614007  226318 start.go:925] validating driver "docker" against <nil>
	I1026 08:28:56.614016  226318 start.go:936] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1026 08:28:56.615730  226318 out.go:203] 
	W1026 08:28:56.616729  226318 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1026 08:28:56.617826  226318 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-110992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-110992" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-535689
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:27:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-462840
contexts:
- context:
    cluster: cert-expiration-535689
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-535689
  name: cert-expiration-535689
- context:
    cluster: kubernetes-upgrade-462840
    user: kubernetes-upgrade-462840
  name: kubernetes-upgrade-462840
current-context: ""
kind: Config
users:
- name: cert-expiration-535689
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.key
- name: kubernetes-upgrade-462840
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key
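
The dump above also explains the errors running through this debugLogs section: the kubeconfig holds only cert-expiration-535689 and kubernetes-upgrade-462840, current-context is empty, and no false-110992 context exists because that profile never started. A minimal sketch of the same existence check with client-go (k8s.io/client-go is an added dependency; the kubeconfig path is the one from the log):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.LoadFromFile("/home/jenkins/minikube-integration/21772-9429/kubeconfig")
		if err != nil {
			fmt.Println("load kubeconfig:", err)
			return
		}
		if _, ok := cfg.Contexts["false-110992"]; !ok {
			fmt.Println(`context "false-110992" does not exist`) // matches kubectl's error
		}
	}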

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-110992

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-110992"

                                                
                                                
----------------------- debugLogs end: false-110992 [took: 3.492362434s] --------------------------------
helpers_test.go:175: Cleaning up "false-110992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-110992
--- PASS: TestNetworkPlugins/group/false (3.81s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.980472269s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.902715849s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-815548 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-815548 status -o json: exit status 2 (316.889964ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-815548","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-815548
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-815548: (2.036629452s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.26s)
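
The status JSON printed above gives the exact schema needed to script the same check: Host running with Kubelet and APIServer stopped is the expected shape after --no-kubernetes. A sketch decoding it (field names taken directly from the log output):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type status struct {
		Name, Host, Kubelet, APIServer, Kubeconfig string
		Worker                                     bool
	}

	func main() {
		// A nonzero exit is expected when components are stopped, so the
		// error is ignored; Output still returns the captured stdout.
		out, _ := exec.Command("minikube", "-p", "NoKubernetes-815548",
			"status", "-o", "json").Output()
		var s status
		if err := json.Unmarshal(out, &s); err != nil {
			fmt.Println("decode:", err)
			return
		}
		fmt.Printf("host=%s kubelet=%s apiserver=%s\n", s.Host, s.Kubelet, s.APIServer)
	}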

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.71s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-815548 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.705967867s)
--- PASS: TestNoKubernetes/serial/Start (4.71s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-815548 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-815548 "sudo systemctl is-active --quiet service kubelet": exit status 1 (293.373687ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.29s)
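
What the non-zero exit above encodes: `systemctl is-active` returns status 3 for an inactive unit (systemd's documented behavior), ssh propagates that as the remote status, and `minikube ssh` surfaces it, so failure here is the desired outcome. A sketch of treating "kubelet inactive" as the pass condition (the wrapper function is hypothetical):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletStopped(profile string) bool {
		cmd := exec.Command("minikube", "ssh", "-p", profile,
			"sudo systemctl is-active --quiet service kubelet")
		return cmd.Run() != nil // any non-zero status means kubelet is not active
	}

	func main() {
		fmt.Println("kubelet stopped:", kubeletStopped("NoKubernetes-815548"))
	}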

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (18.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (17.553598759s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (18.47s)
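
The `--output=json` form used above is the scriptable variant of `profile list`. A sketch that extracts profile names, assuming the top-level "valid"/"invalid" arrays recent minikube releases emit (treat the exact field names as an assumption and decode loosely):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
		if err != nil {
			fmt.Println("profile list:", err)
			return
		}
		var payload struct {
			Valid []struct{ Name string } `json:"valid"`
		}
		if err := json.Unmarshal(out, &payload); err != nil {
			fmt.Println("decode:", err)
			return
		}
		for _, p := range payload.Valid {
			fmt.Println(p.Name)
		}
	}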

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (51.65s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.653458842s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (51.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-815548
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-815548: (1.51033879s)
--- PASS: TestNoKubernetes/serial/Stop (1.51s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-815548 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-815548 --driver=docker  --container-runtime=crio: (7.245079397s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-810379 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c4b87aba-4af2-41ab-b0de-82f97987e1b5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c4b87aba-4af2-41ab-b0de-82f97987e1b5] Running
E1026 08:29:58.719239   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003588011s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-810379 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.52s)
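
The Pending-to-Running progression above is the harness's label-selector wait; outside the suite the same gate can be expressed with `kubectl wait`. A sketch against the context, label, and 8m0s budget from the log:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "--context", "old-k8s-version-810379",
			"wait", "--for=condition=Ready", "pod",
			"-l", "integration-test=busybox", "--timeout=8m0s")
		if out, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("busybox never became ready: %v\n%s", err, out)
			return
		}
		fmt.Println("busybox ready")
	}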

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-815548 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-815548 "sudo systemctl is-active --quiet service kubelet": exit status 1 (322.421147ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.32s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (40.47s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.467821545s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (40.47s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-810379 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-810379 --alsologtostderr -v=3: (15.996334028s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.00s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379: exit status 7 (83.070351ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-810379 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
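
Why "exit status 7 (may be ok)" is tolerated above: after a stop, the test only needs to know the host is down, so it accepts a nonzero exit from `minikube status` as long as stdout reports Stopped. A sketch of that tolerant check, mirroring the --format template from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		cmd := exec.Command("minikube", "status", "--format={{.Host}}",
			"-p", "old-k8s-version-810379", "-n", "old-k8s-version-810379")
		out, _ := cmd.Output() // nonzero exit is expected for a stopped host
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped; safe to enable addons before restart")
		}
	}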

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (42.54s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-810379 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (42.173070805s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-810379 -n old-k8s-version-810379
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (42.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-001983 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [3eb3e11d-988f-48b0-a678-67f786b283c9] Pending
helpers_test.go:352: "busybox" [3eb3e11d-988f-48b0-a678-67f786b283c9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [3eb3e11d-988f-48b0-a678-67f786b283c9] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004097633s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-001983 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.29s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (16.71s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-001983 --alsologtostderr -v=3
E1026 08:30:37.184533   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/functional-852274/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-001983 --alsologtostderr -v=3: (16.707134647s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.71s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-752315 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4] Pending
helpers_test.go:352: "busybox" [5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [5d90b3c9-8de3-47ec-b300-fda7d1a2dcf4] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003542019s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-752315 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.41s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-752315 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-752315 --alsologtostderr -v=3: (16.411327972s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.41s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983: exit status 7 (78.102433ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-001983 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (45.85s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-001983 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.49399507s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-001983 -n no-preload-001983
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (45.85s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315: exit status 7 (85.378546ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-752315 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (50.09s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-752315 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (49.710271597s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-752315 -n embed-certs-752315
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (50.09s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7kfvh" [6b85d1f8-06ed-4998-bad2-19ba60a53a1f] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004033536s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

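The "waiting 9m0s for pods matching ..." line above is a poll: list pods matching the label selector and succeed once one reports Running. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default location; this is an illustration, not the helpers_test.go implementation:

    // wait_for_pods.go: poll for a Running pod matching a label selector.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(9 * time.Minute) // same budget as the test
        for time.Now().Before(deadline) {
            pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
                metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
            if err == nil {
                for _, p := range pods.Items {
                    if p.Status.Phase == corev1.PodRunning {
                        fmt.Println("healthy:", p.Name)
                        return
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for dashboard pod")
    }
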
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-7kfvh" [6b85d1f8-06ed-4998-bad2-19ba60a53a1f] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004160637s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-810379 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-810379 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

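VerifyKubernetesImages asks the profile for its image list as JSON and flags anything outside the expected Kubernetes image set, which is where the "Found non-minikube image" lines come from. A minimal sketch of such an audit; the JSON schema is not shown in this report, so the repoTags field name and the registry.k8s.io allowlist are assumptions:

    // image_audit.go: list images in a profile and flag unexpected repos.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64", "-p",
            "old-k8s-version-810379", "image", "list", "--format=json").Output()
        if err != nil {
            panic(err)
        }
        var entries []map[string]interface{} // schema assumed: array of objects
        if err := json.Unmarshal(out, &entries); err != nil {
            panic(err)
        }
        for _, e := range entries {
            tags, _ := e["repoTags"].([]interface{}) // field name assumed
            for _, t := range tags {
                tag := fmt.Sprint(t)
                if !strings.HasPrefix(tag, "registry.k8s.io/") { // allowlist assumed
                    fmt.Println("Found non-minikube image:", tag)
                }
            }
        }
    }
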
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.67s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (40.667969254s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (40.67s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-48znz" [390e2ecb-697d-4556-824a-09e99b456a1a] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003879993s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-48znz" [390e2ecb-697d-4556-824a-09e99b456a1a] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003647696s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-001983 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-001983 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.28s)

TestStartStop/group/newest-cni/serial/FirstStart (30.05s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (30.04748528s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (30.05s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7m27d" [c2ba33f0-784d-4cd9-9324-324155d48377] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003227876s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestNetworkPlugins/group/auto/Start (39.81s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.808965394s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.81s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7m27d" [c2ba33f0-784d-4cd9-9324-324155d48377] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003192787s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-752315 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-752315 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.3s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0] Pending
helpers_test.go:352: "busybox" [b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [b5bb1b9e-b768-4cd4-94f6-e17e789dd4c0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004584791s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.30s)

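DeployApp ends by exec'ing into the freshly scheduled busybox pod and reading `ulimit -n`, a quick check that the container runtime applied a sane open-file limit. A minimal sketch of the same probe, assuming kubectl is on PATH and using the context name from this log:

    // ulimit_check.go: read the open-file limit inside the busybox pod.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-866212",
            "exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n").CombinedOutput()
        if err != nil {
            panic(fmt.Sprintf("%v: %s", err, out))
        }
        fmt.Println("open file limit in pod:", strings.TrimSpace(string(out)))
    }
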
TestNetworkPlugins/group/kindnet/Start (73.02s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m13.021662448s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (73.02s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (16.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-866212 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-866212 --alsologtostderr -v=3: (16.524463362s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.52s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.52s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-366970 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-366970 --alsologtostderr -v=3: (2.516357813s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.52s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970: exit status 7 (79.218335ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-366970 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (11.56s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-366970 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.195623508s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-366970 -n newest-cni-366970
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.56s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212: exit status 7 (115.115607ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-866212 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.28s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.13s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-866212 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (51.731507596s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-866212 -n default-k8s-diff-port-866212
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.13s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-366970 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-110992 "pgrep -a kubelet"
I1026 08:32:40.594189   12921 config.go:182] Loaded profile config "auto-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

TestNetworkPlugins/group/auto/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-110992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-8jv4b" [68cabc8e-1dc0-4359-8294-fdb56f462364] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-8jv4b" [68cabc8e-1dc0-4359-8294-fdb56f462364] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.003956038s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.24s)

TestNetworkPlugins/group/calico/Start (54.49s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (54.485751551s)
--- PASS: TestNetworkPlugins/group/calico/Start (54.49s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

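The DNS, Localhost, and HairPin subtests above all exec probes in the same netcat deployment; hairpin is the strictest, since the pod must reach itself back through its own Service. A minimal sketch grouping the three probes as they appear in the log, assuming kubectl is on PATH:

    // connectivity_checks.go: DNS, localhost, and hairpin probes via kubectl exec.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(ctx, script string) error {
        out, err := exec.Command("kubectl", "--context", ctx, "exec",
            "deployment/netcat", "--", "/bin/sh", "-c", script).CombinedOutput()
        fmt.Printf("$ %s\n%s", script, out)
        return err
    }

    func main() {
        checks := []string{
            "nslookup kubernetes.default",    // DNS: cluster service discovery
            "nc -w 5 -i 5 -z localhost 8080", // Localhost: pod reaches itself directly
            "nc -w 5 -i 5 -z netcat 8080",    // HairPin: pod reaches itself via its Service
        }
        for _, c := range checks {
            if err := run("auto-110992", c); err != nil {
                fmt.Println("check failed:", err)
            }
        }
    }
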
TestNetworkPlugins/group/custom-flannel/Start (50.63s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.628590162s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.63s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wb2rv" [5e8809c7-efee-4872-a2f7-7a72845156a2] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003703874s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-hxqqs" [c9f8f4f2-3683-4a4c-b19b-1758fdfb707d] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.00383288s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wb2rv" [5e8809c7-efee-4872-a2f7-7a72845156a2] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003978474s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-866212 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (6.07s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-110992 "pgrep -a kubelet"
I1026 08:33:35.864421   12921 config.go:182] Loaded profile config "kindnet-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-110992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ltvgq" [86f0a8b6-154d-46a9-a401-6ff29963b18b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ltvgq" [86f0a8b6-154d-46a9-a401-6ff29963b18b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003419401s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-866212 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.25s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-cpmkc" [1f5656db-3a22-4d4a-91db-b6c445a0837f] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003831555s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.09s)

TestNetworkPlugins/group/kindnet/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

TestNetworkPlugins/group/enable-default-cni/Start (65.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m5.17658009s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (65.18s)

TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-110992 "pgrep -a kubelet"
I1026 08:33:50.609881   12921 config.go:182] Loaded profile config "calico-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.33s)

TestNetworkPlugins/group/calico/NetCatPod (8.22s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-110992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-ws47x" [16cc84f0-802b-408f-afd9-7b126d7f19c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-ws47x" [16cc84f0-802b-408f-afd9-7b126d7f19c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.003937745s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.22s)

TestNetworkPlugins/group/calico/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.12s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-110992 "pgrep -a kubelet"
I1026 08:34:01.038281   12921 config.go:182] Loaded profile config "custom-flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-110992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d22fx" [b9e58b39-f42b-44c0-8ffa-beb62bfd2a62] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d22fx" [b9e58b39-f42b-44c0-8ffa-beb62bfd2a62] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.025897134s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.24s)

TestNetworkPlugins/group/flannel/Start (52.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (52.865940825s)
--- PASS: TestNetworkPlugins/group/flannel/Start (52.87s)

TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

TestNetworkPlugins/group/bridge/Start (70.55s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-110992 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m10.550853667s)
--- PASS: TestNetworkPlugins/group/bridge/Start (70.55s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-110992 "pgrep -a kubelet"
E1026 08:34:55.668007   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:55.674453   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:55.685895   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:55.707341   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1026 08:34:55.741879   12921 config.go:182] Loaded profile config "enable-default-cni-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-110992 replace --force -f testdata/netcat-deployment.yaml
E1026 08:34:55.749372   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:55.830808   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xtkkv" [9dfc28a9-c04c-4402-8d1c-73ed4dd7948b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1026 08:34:55.993056   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:56.314931   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:56.957082   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:58.238938   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1026 08:34:58.718733   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/addons-610291/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-xtkkv" [9dfc28a9-c04c-4402-8d1c-73ed4dd7948b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003696478s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.18s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-npbwk" [5c901cb3-a665-47d1-8a83-9f69909aaa58] Running
E1026 08:35:00.801461   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003289111s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.11s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.08s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-110992 "pgrep -a kubelet"
I1026 08:35:05.795050   12921 config.go:182] Loaded profile config "flannel-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-110992 replace --force -f testdata/netcat-deployment.yaml
E1026 08:35:05.923756   12921 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/old-k8s-version-810379/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gtn7z" [213807e3-162e-4c60-ac73-24de00a14056] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gtn7z" [213807e3-162e-4c60-ac73-24de00a14056] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.003995923s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.17s)

TestNetworkPlugins/group/flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.11s)

TestNetworkPlugins/group/flannel/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.09s)

TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-110992 "pgrep -a kubelet"
I1026 08:35:32.184578   12921 config.go:182] Loaded profile config "bridge-110992": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-110992 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fc6mf" [824f8e84-46ec-4d3a-a460-c18d95814a15] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fc6mf" [824f8e84-46ec-4d3a-a460-c18d95814a15] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.004004108s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.19s)

TestNetworkPlugins/group/bridge/DNS (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-110992 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

TestNetworkPlugins/group/bridge/Localhost (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

TestNetworkPlugins/group/bridge/HairPin (0.08s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-110992 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

Test skip (26/326)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-209240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-209240
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-110992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-110992

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-110992

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/hosts:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/resolv.conf:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-110992

>>> host: crictl pods:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: crictl containers:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> k8s: describe netcat deployment:
error: context "kubenet-110992" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-110992" does not exist

>>> k8s: netcat logs:
error: context "kubenet-110992" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-110992" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-110992" does not exist

>>> k8s: coredns logs:
error: context "kubenet-110992" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-110992" does not exist

>>> k8s: api server logs:
error: context "kubenet-110992" does not exist

>>> host: /etc/cni:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: ip a s:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: ip r s:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: iptables-save:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: iptables table nat:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-110992" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-110992" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-110992" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: kubelet daemon config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> k8s: kubelet logs:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-535689
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:27:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-462840
contexts:
- context:
    cluster: cert-expiration-535689
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-535689
  name: cert-expiration-535689
- context:
    cluster: kubernetes-upgrade-462840
    user: kubernetes-upgrade-462840
  name: kubernetes-upgrade-462840
current-context: ""
kind: Config
users:
- name: cert-expiration-535689
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.key
- name: kubernetes-upgrade-462840
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key
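
This kubeconfig explains every error above: it only carries the cert-expiration-535689 and kubernetes-upgrade-462840 entries left over from earlier tests, and current-context is empty, so any kubectl call scoped to kubenet-110992 has no context to resolve. A quick way to confirm with standard kubectl subcommands:

kubectl config get-contexts                 # lists only the two leftover contexts
kubectl config current-context              # fails: current-context is not set
kubectl --context kubenet-110992 get pods   # reproduces the error seen in these logs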

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-110992

>>> host: docker daemon status:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: docker daemon config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: docker system info:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: cri-docker daemon status:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: cri-docker daemon config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: cri-dockerd version:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: containerd daemon status:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: containerd daemon config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: containerd config dump:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: crio daemon status:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: crio daemon config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: /etc/crio:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

>>> host: crio config:
* Profile "kubenet-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-110992"

----------------------- debugLogs end: kubenet-110992 [took: 3.269239975s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-110992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-110992
--- SKIP: TestNetworkPlugins/group/kubenet (3.44s)
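
kubenet is kubelet's legacy built-in network plugin rather than a CNI plugin, and crio wires pods exclusively through CNI, hence the unconditional skip before any cluster is created. On a runtime that supports it, the group would have started a profile along these lines (hypothetical invocation; the flag is deprecated upstream):

out/minikube-linux-amd64 start -p kubenet-110992 --network-plugin=kubenet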

TestNetworkPlugins/group/cilium (4.02s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-110992 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-110992

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-110992

>>> host: /etc/nsswitch.conf:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/hosts:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/resolv.conf:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-110992

>>> host: crictl pods:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: crictl containers:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> k8s: describe netcat deployment:
error: context "cilium-110992" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-110992" does not exist

>>> k8s: netcat logs:
error: context "cilium-110992" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-110992" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-110992" does not exist

>>> k8s: coredns logs:
error: context "cilium-110992" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-110992" does not exist

>>> k8s: api server logs:
error: context "cilium-110992" does not exist

>>> host: /etc/cni:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: ip a s:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: ip r s:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: iptables-save:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: iptables table nat:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-110992

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-110992

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-110992" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-110992" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-110992

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-110992

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-110992" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-110992" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-110992" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-110992" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-110992" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: kubelet daemon config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> k8s: kubelet logs:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-535689
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21772-9429/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:27:32 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: kubernetes-upgrade-462840
contexts:
- context:
    cluster: cert-expiration-535689
    extensions:
    - extension:
        last-update: Sun, 26 Oct 2025 08:26:25 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-535689
  name: cert-expiration-535689
- context:
    cluster: kubernetes-upgrade-462840
    user: kubernetes-upgrade-462840
  name: kubernetes-upgrade-462840
current-context: ""
kind: Config
users:
- name: cert-expiration-535689
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/cert-expiration-535689/client.key
- name: kubernetes-upgrade-462840
  user:
    client-certificate: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.crt
    client-key: /home/jenkins/minikube-integration/21772-9429/.minikube/profiles/kubernetes-upgrade-462840/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-110992

>>> host: docker daemon status:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: docker daemon config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: docker system info:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: cri-docker daemon status:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: cri-docker daemon config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: cri-dockerd version:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: containerd daemon status:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: containerd daemon config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: containerd config dump:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: crio daemon status:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: crio daemon config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: /etc/crio:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

>>> host: crio config:
* Profile "cilium-110992" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-110992"

----------------------- debugLogs end: cilium-110992 [took: 3.837679679s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-110992" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-110992
--- SKIP: TestNetworkPlugins/group/cilium (4.02s)